Abstract
Generative video models, a leading approach to world modeling, face fundamental limitations. They often violate physical and logical rules, lack interactivity, and operate as opaque black boxes ill-suited for building structured, queryable worlds. To overcome these challenges, we propose a new paradigm focused on distilling an image-caption pair into a tractable, abstract representation optimized for simulation. We introduce VDAWorld, a framework in which a Vision-Language Model (VLM) acts as an intelligent agent to orchestrate this process. The VLM autonomously constructs a grounded (2D or 3D) scene representation by selecting from a suite of vision tools, and accordingly chooses a compatible physics simulator (e.g., rigid body, fluid) to act upon it. VDAWorld can then infer latent dynamics from the static scene to predict plausible future states. Our experiments show that this combination of intelligent abstraction and adaptive simulation yields a versatile world model capable of producing high-quality simulations across a wide range of dynamic scenarios.
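To make the orchestration concrete, below is a minimal, self-contained Python sketch of the pipeline the abstract describes: a planner standing in for the VLM agent selects vision tools to distill a caption into an abstract scene, picks a compatible physics backend, and rolls the scene forward to predict future states. Every name here (Scene, vlm_plan, TOOLS, SIMULATORS) and the toy physics are illustrative assumptions, not the paper's actual interfaces.

```python
# Minimal sketch of a VDAWorld-style pipeline. All names and the toy physics
# below are illustrative assumptions, not the paper's actual interfaces.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Scene:
    # Abstract, queryable scene state distilled from an image-caption pair.
    dims: int                                     # 2 or 3
    objects: list = field(default_factory=list)
    velocity: float = 0.0                         # toy stand-in for latent dynamics

# --- Vision tools the agent can select from (stubs for real perception models) ---
def segment_objects(caption: str) -> list:
    # Pretend segmentation: one object per capitalized word in the caption.
    return [{"name": w, "z": 0.0} for w in caption.split() if w[:1].isupper()]

def lift_to_3d(objects: list) -> list:
    for obj in objects:
        obj["z"] = 1.0                            # lift each object off the ground
    return objects

TOOLS: dict[str, Callable] = {"segment": segment_objects, "lift3d": lift_to_3d}

# --- Physics backends keyed by material class (stubs for rigid/fluid solvers) ---
def rigid_step(scene: Scene, dt: float) -> None:
    scene.velocity -= 9.8 * dt                    # free fall under gravity
    for obj in scene.objects:
        obj["z"] = max(0.0, obj["z"] + scene.velocity * dt)

def fluid_step(scene: Scene, dt: float) -> None:
    scene.velocity *= 0.9                         # crude damped-flow placeholder
    for obj in scene.objects:
        obj["z"] = max(0.0, obj["z"] - 0.1 * dt)

SIMULATORS: dict[str, Callable] = {"rigid": rigid_step, "fluid": fluid_step}

def vlm_plan(caption: str) -> dict:
    # Stand-in for the VLM agent: a real system would query a vision-language
    # model to choose tools and a compatible simulator from the scene content.
    wet = any(w in caption.lower() for w in ("water", "pour", "splash"))
    return {"tools": ["segment", "lift3d"], "simulator": "fluid" if wet else "rigid"}

def build_and_simulate(caption: str, steps: int = 5, dt: float = 0.1) -> Scene:
    plan = vlm_plan(caption)
    objects = TOOLS["segment"](caption)           # ground the caption into objects
    if "lift3d" in plan["tools"]:
        objects = TOOLS["lift3d"](objects)        # optionally lift the scene to 3D
    scene = Scene(dims=3, objects=objects)
    step = SIMULATORS[plan["simulator"]]          # dispatch to the chosen backend
    for _ in range(steps):                        # roll forward to future states
        step(scene, dt)
    return scene

print(build_and_simulate("A Ball drops onto a Table"))
```

The dispatch tables (TOOLS, SIMULATORS) mirror the abstract's central design choice: perception and physics are swappable components the agent selects per scene, rather than a single end-to-end generative model.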
URL
https://arxiv.org/abs/2512.11061