Abstract
Inter-personal anatomical differences limit the accuracy of person-independent gaze estimation networks. Yet gaze errors must be lowered further to enable applications that demand higher accuracy. Further gains can be achieved by personalizing gaze networks, ideally with few calibration samples. However, over-parameterized neural networks are not amenable to learning from few examples, as they quickly over-fit. We embrace these challenges and propose a novel framework for Few-shot Adaptive GaZE Estimation (FAZE), which learns person-specific gaze networks from very few (fewer than 9) calibration samples. FAZE learns a rotation-aware latent representation of gaze via a disentangling encoder-decoder architecture, along with a highly adaptable gaze estimator trained using meta-learning. It can adapt to any new person with as few as 3 samples, achieving significant performance gains and a state-of-the-art error of 3.18 degrees on GazeCapture, a 19% improvement over prior art.
URL
https://arxiv.org/abs/1905.01941
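
The few-shot calibration idea in the abstract can be sketched in miniature: starting from a meta-learned initialization, a lightweight person-specific gaze estimator is fine-tuned on a handful of calibration pairs in a fixed latent gaze space. The sketch below is illustrative only, not the paper's implementation: the linear estimator, the toy data, and the zero initialization are all assumptions standing in for FAZE's latent codes, small gaze network, and meta-learned weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def adapt(W, z_cal, g_cal, lr=0.05, steps=200):
    """Gradient-descent adaptation of estimator weights W on k
    calibration pairs (latent code z -> 2D gaze direction g).
    Stand-in for FAZE's few-shot person-specific fine-tuning."""
    for _ in range(steps):
        pred = z_cal @ W                      # (k, 2) predicted gaze
        grad = z_cal.T @ (pred - g_cal) / len(z_cal)
        W = W - lr * grad
    return W

# Toy "person": a hypothetical person-specific latent-to-gaze mapping.
W_true = rng.normal(size=(8, 2))
Z = rng.normal(size=(100, 8))                 # latent gaze codes
G = Z @ W_true                                # ground-truth gaze

W_meta = np.zeros((8, 2))                     # placeholder for meta-learned init
k = 3                                         # "as few as 3 samples"
W_person = adapt(W_meta, Z[:k], G[:k])

# Held-out error shrinks after adapting on just k calibration samples.
err_before = np.abs(Z[k:] @ W_meta - G[k:]).mean()
err_after = np.abs(Z[k:] @ W_person - G[k:]).mean()
```

With only 3 samples the linear system is underdetermined, so adaptation can only correct the component of the person-specific mapping that the calibration samples span; this mirrors why a good meta-learned starting point, rather than a random one, matters in the paper's setting.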