Abstract
The recent rapid advancement of Text-to-Video (T2V) generation technologies, which are critical to building ``world models'', has made existing benchmarks increasingly insufficient for evaluating state-of-the-art T2V models. First, current evaluation dimensions, such as per-frame aesthetic quality and temporal consistency, can no longer differentiate state-of-the-art T2V models. Second, event-level temporal causality, which not only distinguishes video from other modalities but also constitutes a crucial component of world models, is severely underexplored in existing benchmarks. Third, existing benchmarks lack a systematic assessment of world knowledge, an essential capability for building world models. To address these issues, we introduce VideoVerse, a comprehensive benchmark that evaluates whether a T2V model understands complex temporal causality and world knowledge in the real world. We collect representative videos across diverse domains (e.g., natural landscapes, sports, indoor scenes, science fiction, chemical and physical experiments) and extract their event-level descriptions with inherent temporal causality, which independent annotators then rewrite into text-to-video prompts. For each prompt, we design a suite of binary evaluation questions covering both dynamic and static properties, spanning ten carefully defined evaluation dimensions. In total, VideoVerse comprises 300 carefully curated prompts, involving 815 events and 793 binary evaluation questions. We then develop a human-preference-aligned, QA-based evaluation pipeline built on modern vision-language models. Finally, we perform a systematic evaluation of state-of-the-art open-source and closed-source T2V models on VideoVerse, providing an in-depth analysis of how far current T2V generators are from world models.
URL
https://arxiv.org/abs/2510.08398