Abstract
We introduce the task of arbitrary spatio-temporal video completion, in which a video is generated from arbitrary, user-specified patches placed at any spatial location and timestamp, akin to painting on a video canvas. This flexible formulation naturally unifies many existing controllable video generation tasks (including first-frame image-to-video, inpainting, extension, and interpolation) under a single, cohesive paradigm. Realizing this vision, however, confronts a fundamental obstacle in modern latent video diffusion models: the temporal ambiguity introduced by causal VAEs, which compress multiple pixel frames into a single latent representation and thereby make precise frame-level conditioning structurally difficult. We address this challenge with VideoCanvas, a novel framework that adapts the In-Context Conditioning (ICC) paradigm to this fine-grained control task with zero new parameters. We propose a hybrid conditioning strategy that decouples spatial and temporal control: spatial placement is handled via zero-padding, while temporal alignment is achieved through Temporal RoPE Interpolation, which assigns each condition a continuous fractional position within the latent sequence. This resolves the VAE's temporal ambiguity and enables pixel-frame-aware control on a frozen backbone. To evaluate this new capability, we develop VideoCanvasBench, the first benchmark for arbitrary spatio-temporal video completion, covering both intra-scene fidelity and inter-scene creativity. Experiments demonstrate that VideoCanvas significantly outperforms existing conditioning paradigms, establishing a new state of the art in flexible and unified video generation.
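To make the Temporal RoPE Interpolation idea concrete, the following is a minimal PyTorch sketch, not taken from the paper: a condition patch placed at pixel frame t is assigned the fractional latent position t / stride, where stride is the causal VAE's temporal compression ratio, and a standard rotary embedding is simply evaluated at that non-integer position. The stride value, tensor shapes, and helper names (rope_angles, apply_rope) are illustrative assumptions.

    import torch

    def rope_angles(pos: torch.Tensor, dim: int, base: float = 10000.0) -> torch.Tensor:
        """Rotary-embedding angles for (possibly fractional) positions.

        pos: float tensor of positions, shape (...,); dim must be even.
        Returns angles of shape (..., dim // 2).
        """
        inv_freq = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
        return pos[..., None] * inv_freq

    def apply_rope(x: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        """Rotate feature pairs of x by the angles for `pos` (standard RoPE).

        Nothing here requires `pos` to be an integer, which is what makes
        fractional (interpolated) temporal positions possible.
        """
        ang = rope_angles(pos, x.shape[-1])
        cos, sin = ang.cos(), ang.sin()
        x1, x2 = x[..., 0::2], x[..., 1::2]
        out = torch.empty_like(x)
        out[..., 0::2] = x1 * cos - x2 * sin
        out[..., 1::2] = x1 * sin + x2 * cos
        return out

    # Assumed setup: the causal VAE compresses `stride` pixel frames per
    # latent, and the denoised latents sit at integer positions 0, 1, 2, ...
    stride = 4                     # hypothetical temporal compression ratio
    pixel_frame = 10               # user-specified timestamp of the patch
    latent_pos = torch.tensor([pixel_frame / stride])  # 2.5, between latents 2 and 3

    tokens = torch.randn(1, 64)    # toy condition token, feature dim 64
    conditioned = apply_rope(tokens, latent_pos)

Under these assumptions, the condition token is injected in-context at position 2.5 rather than being snapped to latent 2 or 3, which is how a single latent spanning several pixel frames can still be conditioned at an exact frame.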
URL
https://arxiv.org/abs/2510.08555