Paper Reading AI Learner

Voyager: Long-Range and World-Consistent Video Diffusion for Explorable 3D Scene Generation

2025-06-04 17:59:04
Tianyu Huang, Wangguandong Zheng, Tengfei Wang, Yuhao Liu, Zhenwei Wang, Junta Wu, Jie Jiang, Hui Li, Rynson W. H. Lau, Wangmeng Zuo, Chunchao Guo

Abstract

Real-world applications like video gaming and virtual reality often demand the ability to model 3D scenes that users can explore along custom camera trajectories. While significant progress has been made in generating 3D objects from text or images, creating long-range, 3D-consistent, explorable 3D scenes remains a complex and challenging problem. In this work, we present Voyager, a novel video diffusion framework that generates world-consistent 3D point-cloud sequences from a single image with a user-defined camera path. Unlike existing approaches, Voyager achieves end-to-end scene generation and reconstruction with inherent consistency across frames, eliminating the need for 3D reconstruction pipelines (e.g., structure-from-motion or multi-view stereo). Our method integrates three key components: 1) World-Consistent Video Diffusion: A unified architecture that jointly generates aligned RGB and depth video sequences, conditioned on existing world observations to ensure global coherence; 2) Long-Range World Exploration: An efficient world cache with point culling and auto-regressive inference with smooth video sampling, enabling iterative scene extension with context-aware consistency; and 3) Scalable Data Engine: A video reconstruction pipeline that automates camera pose estimation and metric depth prediction for arbitrary videos, enabling large-scale, diverse training-data curation without manual 3D annotations. Collectively, these designs yield a clear improvement over existing methods in visual quality and geometric accuracy, with versatile applications.
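The "world cache with point culling" described above can be illustrated with a minimal sketch: each generated RGB-D frame is unprojected into world-space points via the camera intrinsics and pose, and a voxel grid culls near-duplicate points so the cache stays bounded as the trajectory grows. This is a simplified illustration under assumed pinhole-camera conventions, not the paper's actual implementation; the class and function names here (`WorldCache`, `unproject_depth`) are hypothetical.

```python
import numpy as np

def unproject_depth(depth, K, c2w):
    """Lift a depth map to world-space 3D points (pinhole camera model)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    # Camera-space coordinates: x = (u - cx) * z / fx, y = (v - cy) * z / fy
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=-1).reshape(-1, 4)
    # Apply the camera-to-world transform and drop the homogeneous coordinate.
    return (pts_cam @ c2w.T)[:, :3]

class WorldCache:
    """Accumulates points from generated RGB-D frames; a voxel grid
    culls near-duplicate points to keep the cache bounded."""

    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        self.voxels = {}  # voxel index -> representative point

    def add_frame(self, depth, K, c2w):
        for p in unproject_depth(depth, K, c2w):
            key = tuple(np.floor(p / self.voxel_size).astype(int))
            self.voxels.setdefault(key, p)  # keep the first observation per voxel

    def points(self):
        return np.array(list(self.voxels.values()))
```

During auto-regressive extension, the cached points for the next camera segment would be re-projected to condition the diffusion model; the voxel deduplication above is what keeps repeated observations of the same surface from growing the cache.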

URL

https://arxiv.org/abs/2506.04225

PDF

https://arxiv.org/pdf/2506.04225.pdf
