Paper Reading AI Learner

VideoCanvas: Unified Video Completion from Arbitrary Spatiotemporal Patches via In-Context Conditioning

2025-10-09 17:58:59
Minghong Cai, Qiulin Wang, Zongli Ye, Wenze Liu, Quande Liu, Weicai Ye, Xintao Wang, Pengfei Wan, Kun Gai, Xiangyu Yue

Abstract

We introduce the task of arbitrary spatio-temporal video completion, where a video is generated from arbitrary, user-specified patches placed at any spatial location and timestamp, akin to painting on a video canvas. This flexible formulation naturally unifies many existing controllable video generation tasks--including first-frame image-to-video, inpainting, extension, and interpolation--under a single, cohesive paradigm. Realizing this vision, however, faces a fundamental obstacle in modern latent video diffusion models: the temporal ambiguity introduced by causal VAEs, where multiple pixel frames are compressed into a single latent representation, making precise frame-level conditioning structurally difficult. We address this challenge with VideoCanvas, a novel framework that adapts the In-Context Conditioning (ICC) paradigm to this fine-grained control task with zero new parameters. We propose a hybrid conditioning strategy that decouples spatial and temporal control: spatial placement is handled via zero-padding, while temporal alignment is achieved through Temporal RoPE Interpolation, which assigns each condition a continuous fractional position within the latent sequence. This resolves the VAE's temporal ambiguity and enables pixel-frame-aware control on a frozen backbone. To evaluate this new capability, we develop VideoCanvasBench, the first benchmark for arbitrary spatio-temporal video completion, covering both intra-scene fidelity and inter-scene creativity. Experiments demonstrate that VideoCanvas significantly outperforms existing conditioning paradigms, establishing a new state of the art in flexible and unified video generation.
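The core trick described above — resolving the causal VAE's temporal ambiguity by giving each conditioning pixel frame a continuous, fractional RoPE position in the latent sequence — can be sketched roughly as follows. This is a minimal illustration based only on the abstract: the temporal stride of 4, the function names, and the RoPE layout are all assumptions, not the paper's actual implementation.

```python
import math

TEMPORAL_STRIDE = 4  # assumed causal-VAE compression: 4 pixel frames -> 1 latent


def latent_position(frame_idx: int, stride: int = TEMPORAL_STRIDE) -> float:
    """Map a pixel-frame timestamp to a fractional latent-sequence position,
    rather than snapping it to the nearest whole latent index."""
    return frame_idx / stride


def rope_angles(position: float, dim: int, base: float = 10000.0):
    """Standard RoPE rotation angles, evaluated at a continuous position.
    Because `position` may be fractional, conditions falling between two
    latent slots get smoothly interpolated positional encodings."""
    return [position / (base ** (2 * i / dim)) for i in range(dim // 2)]


# A condition placed at pixel frame 6 lands between latent slots 1 and 2,
# so its angles lie between those of integer positions 1 and 2.
pos = latent_position(6)            # 1.5
angles = rope_angles(pos, dim=8)
```

Under this reading, spatial placement needs no such trick (zero-padding in the latent grid suffices), while temporal placement requires the fractional positions because several pixel frames share one latent.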


URL

https://arxiv.org/abs/2510.08555

PDF

https://arxiv.org/pdf/2510.08555.pdf
