Abstract
Recent breakthroughs in single-image 3D portrait reconstruction have enabled telepresence systems to stream 3D portrait videos from a single camera in real time, potentially democratizing telepresence. However, per-frame 3D reconstruction exhibits temporal inconsistency and forgets the user's appearance. On the other hand, self-reenactment methods can render coherent 3D portraits by driving a personalized 3D prior, but they fail to faithfully reconstruct the user's per-frame appearance (e.g., facial expressions and lighting). In this work, we recognize the need to maintain both coherent identity and dynamic per-frame appearance to enable the best possible realism. To this end, we propose a new fusion-based method that fuses a personalized 3D subject prior with per-frame information, producing temporally stable 3D videos that faithfully reconstruct the user's per-frame appearance. Trained using only synthetic data produced by an expression-conditioned 3D GAN, our encoder-based method achieves both state-of-the-art 3D reconstruction accuracy and temporal consistency on in-studio and in-the-wild datasets.
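To make the fusion idea concrete, below is a minimal sketch (not the authors' code) of how a fixed personalized prior could be combined with per-frame information by an encoder-based fuser. All names here (TriplaneFuser, prior_triplane, the channel counts, and the layer choices) are hypothetical illustrations, not the paper's actual architecture.

```python
# Minimal sketch: fuse a personalized prior representation (stable identity)
# with features extracted from the current frame (dynamic appearance).
# Assumes PyTorch; all module names and shapes are illustrative only.
import torch
import torch.nn as nn

class TriplaneFuser(nn.Module):
    """Combines a fixed per-subject prior with per-frame encoder features."""
    def __init__(self, channels: int = 32):
        super().__init__()
        # Per-frame encoder: maps the input video frame to a feature map
        # with the same layout as the prior (here, 3 planes x `channels`).
        self.frame_encoder = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 3 * channels, 3, padding=1),
        )
        # Fusion layer: merges prior and per-frame features channel-wise.
        self.fuse = nn.Conv2d(2 * 3 * channels, 3 * channels, 1)

    def forward(self, prior_triplane: torch.Tensor, frame: torch.Tensor) -> torch.Tensor:
        # prior_triplane: (B, 3*C, H, W) personalized subject prior
        # frame:          (B, 3,   H, W) current camera frame
        per_frame = self.frame_encoder(frame)
        fused = self.fuse(torch.cat([prior_triplane, per_frame], dim=1))
        return fused  # fused representation, to be rendered volumetrically

# Usage: fuse the subject prior with one incoming frame.
fuser = TriplaneFuser(channels=32)
prior = torch.randn(1, 96, 256, 256)   # hypothetical personalized prior
frame = torch.randn(1, 3, 256, 256)    # current video frame
fused = fuser(prior, frame)
print(fused.shape)                     # torch.Size([1, 96, 256, 256])
```

In this sketch the prior supplies temporally stable identity while the per-frame branch contributes expression and lighting; the fusion layer decides how to reconcile the two, which mirrors the abstract's stated goal of keeping both coherent identity and faithful per-frame appearance.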
URL
https://arxiv.org/abs/2405.00794