Abstract
We tackle the problem of monocular-to-stereo video conversion and propose a novel architecture for inpainting and refinement of the warped right view obtained by depth-based reprojection of the input left view. We extend the Stable Video Diffusion (SVD) model to utilize the input left video, the warped right video, and the disocclusion masks as conditioning input to generate a high-quality right camera view. In order to effectively exploit information from neighboring frames for inpainting, we modify the attention layers in SVD to compute full attention for disoccluded pixels. Our model is trained to generate the right view video in an end-to-end manner by minimizing image space losses to ensure high-quality generation. Our approach outperforms previous state-of-the-art methods, obtaining an average rank of 1.43 among the four compared methods in a user study, while being 6x faster than the second-placed method.
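The abstract's key architectural change is that disoccluded pixels receive full attention over all tokens, so the inpainting can draw on neighboring frames where the warped right view has holes. The following is a minimal NumPy sketch of that masking idea, not the paper's implementation: the frame-local baseline attention, the function name, and the flat token layout are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def disocclusion_attention(q, k, v, frame_ids, disoccluded):
    """Attention with per-query masks over a flattened video.

    q, k, v:      (N, d) token features pooled across all frames.
    frame_ids:    (N,) frame index of each token.
    disoccluded:  (N,) bool; True for queries in disoccluded regions.

    Disoccluded queries attend to every token in the video (full
    attention); all other queries attend only within their own frame
    (an assumed frame-local baseline, for illustration).
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (N, N) raw logits
    same_frame = frame_ids[:, None] == frame_ids[None, :]
    # Broadcasting disoccluded[:, None] opens the full row for those queries.
    allowed = same_frame | disoccluded[:, None]
    scores = np.where(allowed, scores, -np.inf)        # mask disallowed keys
    return softmax(scores, axis=-1) @ v
```

A disoccluded query thus aggregates features from every frame, while ordinary queries behave like standard per-frame attention; in the actual SVD-based model the equivalent masking would act inside the pretrained attention layers rather than on raw NumPy arrays.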
URL
https://arxiv.org/abs/2505.16565