Abstract
Collecting accurate camera poses for training images has been shown to benefit the learning of 3D-aware generative adversarial networks (GANs), yet doing so can be quite expensive in practice. This work targets learning 3D-aware GANs from unposed images, for which we propose on-the-fly pose estimation of training images with a learned template feature field (TeFF). Concretely, in addition to the generative radiance field used in previous approaches, we ask the generator to also learn a field of 2D semantic features that shares the density of the radiance field. Such a framework allows us to acquire a canonical 3D feature template by leveraging the dataset mean discovered by the generative model, and to further efficiently estimate the pose parameters of real data. Experimental results on various challenging datasets demonstrate the superiority of our approach over state-of-the-art alternatives from both qualitative and quantitative perspectives.
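The core architectural idea above, a radiance field and a semantic feature field decoded from a shared density trunk, can be illustrated with a minimal NumPy sketch. This is a hypothetical toy model, not the paper's actual generator: the layer sizes, head names, and activations are assumptions chosen only to show how one trunk can serve both fields.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions, not the paper's): 3D point in, hidden width,
# and the dimensionality of the 2D-semantic feature being distilled.
D_IN, D_HID, D_FEAT = 3, 16, 8

W_trunk = rng.normal(size=(D_IN, D_HID))    # shared trunk weights
W_sigma = rng.normal(size=(D_HID, 1))       # density head (shared by both fields)
W_rgb   = rng.normal(size=(D_HID, 3))       # radiance (color) head
W_feat  = rng.normal(size=(D_HID, D_FEAT))  # semantic-feature head

def query_fields(x):
    """Query density, color, and semantic feature at 3D points x of shape (N, 3)."""
    h = np.maximum(x @ W_trunk, 0.0)          # shared trunk, ReLU
    sigma = np.log1p(np.exp(h @ W_sigma))     # softplus -> non-negative density
    rgb = 1.0 / (1.0 + np.exp(-(h @ W_rgb)))  # sigmoid -> colors in [0, 1]
    feat = h @ W_feat                         # unconstrained semantic features
    return sigma, rgb, feat

pts = rng.normal(size=(4, 3))
sigma, rgb, feat = query_fields(pts)
print(sigma.shape, rgb.shape, feat.shape)  # (4, 1) (4, 3) (4, 8)
```

Because both heads read the same trunk and the same density, volume-rendering the feature head along camera rays yields feature images that are geometrically consistent with the rendered RGB, which is what makes a rendered canonical feature template usable for pose estimation against real images.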
Abstract (translated)
Collecting accurate camera poses of training images has been shown to be highly beneficial for learning 3D-aware generative adversarial networks (GANs), yet in practice this can be quite expensive. This work targets learning 3D-aware GANs from unposed images, for which we propose on-the-fly pose estimation of training images via a learned template feature field (TeFF). Concretely, in addition to the generative radiance field of previous approaches, we ask the generator to also learn a 2D semantic feature field that shares the density of the radiance field. Such a framework allows us to obtain a canonical 3D feature template by leveraging the dataset mean discovered by the generative model, and to further efficiently estimate pose parameters on real data. Experimental results on various challenging datasets show that our approach outperforms state-of-the-art alternatives both qualitatively and quantitatively.
URL
https://arxiv.org/abs/2404.05705