Paper Reading AI Learner

VDAWorld: World Modelling via VLM-Directed Abstraction and Simulation

2025-12-11 19:21:47
Felix O'Mahony, Roberto Cipolla, Ayush Tewari

Abstract

Generative video models, a leading approach to world modeling, face fundamental limitations. They often violate physical and logical rules, lack interactivity, and operate as opaque black boxes ill-suited for building structured, queryable worlds. To overcome these challenges, we propose a new paradigm focused on distilling an image-caption pair into a tractable, abstract representation optimized for simulation. We introduce VDAWorld, a framework in which a Vision-Language Model (VLM) acts as an intelligent agent to orchestrate this process. The VLM autonomously constructs a grounded (2D or 3D) scene representation by selecting from a suite of vision tools, and accordingly chooses a compatible physics simulator (e.g., rigid body, fluid) to act upon it. VDAWorld can then infer latent dynamics from the static scene to predict plausible future states. Our experiments show that this combination of intelligent abstraction and adaptive simulation yields a versatile world model capable of producing high-quality simulations across a wide range of dynamic scenarios.
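The abstract describes a dispatch pipeline: the VLM inspects the image-caption pair, selects a vision tool to build a scene abstraction, and pairs it with a compatible physics simulator before rolling the scene forward. A minimal sketch of that control flow is below; all names (`choose_pipeline`, the tool and simulator labels, `predict_future`) are illustrative assumptions, not the paper's actual API, and the VLM decision is replaced by a keyword rule so the sketch stays runnable.

```python
# Hypothetical sketch of VLM-directed abstraction and simulation dispatch.
# The tool/simulator names and the rule-based stand-in for the VLM are
# assumptions for illustration only.

def choose_pipeline(caption: str) -> tuple[str, str]:
    """Stand-in for the VLM agent: map a caption to a
    (vision tool, physics simulator) pair. A real system would query
    a VLM; a keyword rule keeps this sketch self-contained."""
    text = caption.lower()
    if any(w in text for w in ("water", "liquid", "pour", "splash")):
        return ("depth_estimator_3d", "fluid")
    if any(w in text for w in ("flag", "cloth", "curtain")):
        return ("mesh_reconstructor_3d", "cloth")
    return ("segmenter_2d", "rigid_body")


def predict_future(image, caption: str, steps: int = 10) -> dict:
    """Abstract the image into a scene with the chosen tool, then hand
    the scene to the chosen simulator. Scene construction and the
    simulation step are stubbed out here."""
    tool, simulator = choose_pipeline(caption)
    scene = {"tool": tool, "objects": []}  # placeholder abstraction
    return {"simulator": simulator, "steps": steps, "scene": scene}
```

The point of the dispatch step is that different dynamics (rigid bodies vs. fluids) need different scene abstractions, so the tool choice and the simulator choice must be made jointly rather than independently.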

URL

https://arxiv.org/abs/2512.11061

PDF

https://arxiv.org/pdf/2512.11061.pdf

