Abstract
We propose a method for text-driven perpetual view generation: synthesizing long videos of arbitrary scenes solely from an input text describing the scene, together with the desired camera poses. We introduce a novel framework that generates such videos in an online fashion by combining the generative power of a pre-trained text-to-image model with the geometric priors learned by a pre-trained monocular depth prediction model. To achieve 3D consistency, i.e., to generate videos that depict geometrically plausible scenes, we deploy online test-time training that encourages the predicted depth map of the current frame to be geometrically consistent with the synthesized scene. The depth maps are used to construct a unified mesh representation of the scene, which is updated throughout the generation process and used for rendering. In contrast to previous works, which are applicable only to limited domains (e.g., landscapes), our framework generates diverse scenes, such as walkthroughs in spaceships, caves, or ice castles. Project page: this https URL
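To make the pipeline concrete, below is a minimal, heavily simplified sketch of the online generation loop in PyTorch. It is not the authors' implementation: `DepthNet`, `inpaint_frame`, and `render_from_scene` are hypothetical stubs standing in for the pre-trained depth predictor, the text-to-image model, and the mesh rasterizer, and the unified mesh is reduced here to a single fused RGB-D buffer. Only the control flow (render the scene into the next pose, inpaint disoccluded pixels, test-time-train the depth network against the rendered scene depth, fuse the result back into the scene) mirrors what the abstract describes.

```python
import torch
import torch.nn as nn

H = W = 64  # toy resolution

class DepthNet(nn.Module):
    """Stand-in for a pre-trained monocular depth predictor."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Softplus(),  # positive depths
        )

    def forward(self, rgb):  # rgb: (1, 3, H, W) -> depth: (1, 1, H, W)
        return self.net(rgb)

def inpaint_frame(warped_rgb, mask, prompt):
    """Stand-in for the pre-trained text-to-image model: keeps pixels already
    covered by the scene (mask == 1) and hallucinates the disoccluded rest."""
    hallucinated = torch.rand_like(warped_rgb)
    return torch.where(mask.bool(), warped_rgb, hallucinated)

def render_from_scene(scene, pose):
    """Stand-in renderer: in the paper this rasterizes the unified mesh into
    the new camera `pose`; here it returns a dummy partial view, its depth,
    and a visibility mask marking pixels already covered by the scene."""
    mask = (torch.rand(1, 1, H, W) > 0.3).float()
    return scene["rgb"] * mask, scene["depth"], mask

def generate(prompt, poses, tt_steps=20, lr=1e-4):
    depth_net = DepthNet()
    opt = torch.optim.Adam(depth_net.parameters(), lr=lr)

    # Bootstrap: synthesize the first frame entirely from the text prompt.
    frame = inpaint_frame(torch.zeros(1, 3, H, W), torch.zeros(1, 1, H, W), prompt)
    scene = {"rgb": frame, "depth": depth_net(frame).detach()}
    video = [frame]

    for pose in poses:
        # 1. Render the existing scene geometry into the next camera pose.
        warped_rgb, scene_depth, mask = render_from_scene(scene, pose)
        # 2. Inpaint newly revealed regions with the text-to-image model.
        frame = inpaint_frame(warped_rgb, mask, prompt)
        # 3. Online test-time training: fine-tune the depth predictor so its
        #    output agrees with the scene's rendered depth on seen pixels.
        for _ in range(tt_steps):
            loss = ((depth_net(frame) - scene_depth).abs() * mask).mean()
            opt.zero_grad()
            loss.backward()
            opt.step()
        # 4. Fuse the frame and its now-consistent depth back into the scene.
        scene = {"rgb": frame.detach(), "depth": depth_net(frame).detach()}
        video.append(frame.detach())
    return video

frames = generate("a walkthrough in an ice castle", poses=[torch.eye(4)] * 4)
print(len(frames), frames[0].shape)  # 5 torch.Size([1, 3, 64, 64])
```

The key step is (3): by fine-tuning the depth predictor per frame only on pixels whose geometry is already fixed by the scene, the depths predicted for newly inpainted regions are encouraged to agree with the existing representation, which is what keeps the generated video 3D-consistent.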
URL
https://arxiv.org/abs/2302.01133