Abstract
Despite recent remarkable achievements in gaze estimation, efficient and accurate personalization of gaze estimation without labels is a practical problem that has rarely been addressed in the literature. To achieve efficient personalization, we take inspiration from recent advances in Natural Language Processing (NLP) and update only a negligible number of parameters, the "prompts", at test time. Specifically, the prompt is attached alongside the original network without perturbing it and contains less than 1% of a ResNet-18's parameters. Our experiments show the high efficiency of this prompt-tuning approach: the proposed method adapts up to 10 times faster than the compared methods. However, updating the prompt for personalized gaze estimation without labels is non-trivial. At test time, it is essential to ensure that minimizing a particular unsupervised loss also minimizes the gaze estimation error. To address this difficulty, we propose to meta-learn the prompt so that its updates align with this goal. Our experiments show that the meta-learned prompt can be adapted effectively even with a simple symmetry loss. In addition, we evaluate on four cross-dataset validations to demonstrate the remarkable advantages of the proposed method.
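The abstract does not specify how the prompt is attached or what the symmetry loss looks like, so the following is a minimal sketch, assuming an additive input-space prompt on a frozen ResNet-18 gaze regressor and a horizontal-flip symmetry loss; all class and variable names (PromptedGazeNet, symmetry_loss, frames) are illustrative, not the authors' code.

```python
# Sketch of test-time prompt tuning for gaze personalization (assumptions:
# additive input-space prompt; frozen ResNet-18 regressing (pitch, yaw)).
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.models as models

class PromptedGazeNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, 2)  # (pitch, yaw)
        for p in backbone.parameters():     # the original network stays frozen
            p.requires_grad = False
        self.backbone = backbone
        # The prompt: ~3k parameters, far below 1% of ResNet-18's ~11.7M.
        self.prompt = nn.Parameter(torch.zeros(1, 3, 32, 32))

    def forward(self, x):
        # Attach the prompt additively without touching backbone weights.
        pad = F.interpolate(self.prompt, size=x.shape[-2:], mode="bilinear",
                            align_corners=False)
        return self.backbone(x + pad)

def symmetry_loss(model, x):
    # A horizontally flipped face should yield the same pitch and a
    # sign-flipped yaw; no gaze labels are needed.
    g = model(x)
    g_flip = model(torch.flip(x, dims=[-1]))
    mirrored = torch.stack([g_flip[:, 0], -g_flip[:, 1]], dim=1)
    return F.mse_loss(g, mirrored)

model = PromptedGazeNet().eval()                # eval(): keep BN statistics fixed
opt = torch.optim.SGD([model.prompt], lr=1e-2)  # only the prompt is updated
frames = torch.randn(8, 3, 224, 224)            # stand-in for unlabeled user frames
for _ in range(5):                              # a few fast adaptation steps
    opt.zero_grad()
    symmetry_loss(model, frames).backward()
    opt.step()
```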
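The requirement that minimizing the unsupervised loss also reduces gaze error can be encoded with a MAML-style meta-objective. Below is a sketch under the same assumptions, reusing PromptedGazeNet and symmetry_loss from above; the inner/outer learning rates and the labeled source batch (x, y) are placeholders.

```python
def meta_step(model, x, y, inner_lr=1e-2):
    # Inner step: simulate one test-time symmetry-loss update of the prompt.
    inner = symmetry_loss(model, x)
    grad, = torch.autograd.grad(inner, model.prompt, create_graph=True)
    adapted = model.prompt - inner_lr * grad
    # Outer objective: supervised gaze error *after* the inner update, so the
    # symmetry-loss update is trained to align with reducing gaze error.
    pad = F.interpolate(adapted, size=x.shape[-2:], mode="bilinear",
                        align_corners=False)
    return F.mse_loss(model.backbone(x + pad), y)

meta_opt = torch.optim.Adam([model.prompt], lr=1e-3)
x, y = torch.randn(8, 3, 224, 224), torch.randn(8, 2)  # stand-in labeled source data
for _ in range(3):
    meta_opt.zero_grad()
    meta_step(model, x, y).backward()
    meta_opt.step()
```

At deployment, only the unlabeled test-time loop from the first sketch would run; the meta-learned prompt serves as its initialization.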
URL
https://arxiv.org/abs/2401.01577