Abstract
Accurate face parsing under extreme viewing angles remains a significant challenge due to limited labeled data in such poses. Manual annotation is costly and often impractical at scale. We propose a novel label refinement pipeline that leverages 3D Gaussian Splatting (3DGS) to generate accurate segmentation masks from noisy multiview predictions. By jointly fitting two 3DGS models, one to RGB images and one to their initial segmentation maps, our method enforces multiview consistency through shared geometry, enabling the synthesis of pose-diverse training data with only minimal post-processing. Fine-tuning a face parsing model on this refined dataset significantly improves accuracy on challenging head poses, while maintaining strong performance on standard views. Extensive experiments, including human evaluations, demonstrate that our approach outperforms state-of-the-art methods, despite requiring no ground-truth 3D annotations and using only a small set of initial images. Our method offers a scalable and effective solution for improving face parsing robustness in real-world settings.
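A minimal sketch of the shared-geometry idea the abstract describes, assuming a PyTorch-style setup. The `render_gaussians` function is a placeholder for any differentiable Gaussian rasterizer; its signature, the Gaussian attribute names, and the loss weighting below are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn.functional as F

class JointGaussians(torch.nn.Module):
    """One set of Gaussians with two appearance branches (RGB and labels)."""
    def __init__(self, n_gaussians: int, n_classes: int):
        super().__init__()
        # Geometry is shared between the RGB and segmentation renderings,
        # which is what couples the two fits and enforces multiview
        # consistency of the refined masks.
        self.means = torch.nn.Parameter(torch.randn(n_gaussians, 3))
        self.log_scales = torch.nn.Parameter(torch.zeros(n_gaussians, 3))
        self.quats = torch.nn.Parameter(torch.randn(n_gaussians, 4))
        self.opacity_logits = torch.nn.Parameter(torch.zeros(n_gaussians))
        # Two per-Gaussian appearance branches over the same geometry.
        self.rgb = torch.nn.Parameter(torch.rand(n_gaussians, 3))
        self.seg_logits = torch.nn.Parameter(torch.zeros(n_gaussians, n_classes))

def training_step(model, render_gaussians, camera, image, noisy_mask,
                  seg_weight=1.0):
    geom = (model.means,
            model.log_scales.exp(),
            F.normalize(model.quats, dim=-1),
            torch.sigmoid(model.opacity_logits))
    # Render once per branch; gradients from both losses flow into the
    # shared geometry parameters.
    pred_rgb = render_gaussians(*geom, colors=torch.sigmoid(model.rgb),
                                camera=camera)          # (H, W, 3)
    pred_seg = render_gaussians(*geom, colors=model.seg_logits,
                                camera=camera)          # (H, W, n_classes)
    loss_rgb = F.l1_loss(pred_rgb, image)
    # Noisy per-view 2D predictions supervise the label branch; view-to-view
    # inconsistencies are averaged out by the single 3D representation.
    loss_seg = F.cross_entropy(pred_seg.permute(2, 0, 1)[None],
                               noisy_mask[None])
    return loss_rgb + seg_weight * loss_seg
```

Rendering the segmentation branch from novel cameras would then yield the pose-diverse, multiview-consistent masks the abstract refers to.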
URL
https://arxiv.org/abs/2510.08096