Abstract
In this research, we present SLYKLatent, a novel approach to enhancing gaze estimation that addresses appearance instability in datasets arising from aleatoric uncertainties, covariate shift, and test-domain generalization. SLYKLatent uses self-supervised learning for initial training on facial expression datasets, followed by refinement with a patch-based tri-branch network and an inverse explained-variance-weighted training loss. Our evaluation on benchmark datasets achieves an 8.7% improvement on Gaze360, rivals top MPIIFaceGaze results, and leads on a subset of ETH-XGaze by 13%, surpassing existing methods by significant margins. Adaptability tests on RAF-DB and AffectNet yield accuracies of 86.4% and 60.9%, respectively. Ablation studies confirm the effectiveness of SLYKLatent's novel components. The approach shows strong potential for human-robot interaction.
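To make the idea of an inverse explained-variance-weighted training loss concrete, the following is a minimal PyTorch sketch of one plausible reading: per-output-dimension squared errors are reweighted by the inverse of the explained variance of the predictions for that dimension, so poorly explained dimensions receive more weight. The function name, weighting scheme, and all details here are illustrative assumptions, not the authors' implementation.

```python
import torch


def inverse_ev_weighted_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Illustrative inverse explained-variance-weighted regression loss (assumed form).

    pred, target: tensors of shape (batch, dims), e.g. dims = 2 for yaw/pitch gaze angles.
    """
    residual = target - pred                        # per-sample, per-dimension error
    var_res = residual.var(dim=0, unbiased=False)   # residual variance per dimension
    var_tgt = target.var(dim=0, unbiased=False)     # target variance per dimension
    explained_var = 1.0 - var_res / (var_tgt + eps) # explained variance per dimension
    weights = 1.0 / explained_var.clamp(min=eps)    # inverse explained variance as weight
    weights = weights.detach()                      # treat weights as constants w.r.t. gradients
    return (weights * residual.pow(2)).mean()


# Hypothetical usage: `model` and `gaze_targets` are placeholders, not from the paper.
# pred = model(images)                # (batch, 2) predicted gaze angles
# loss = inverse_ev_weighted_loss(pred, gaze_targets)
```

Detaching the weights keeps them from influencing the gradient direction directly; whether the original method does this is an assumption of this sketch.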
URL
https://arxiv.org/abs/2402.01555