Abstract
Gaze is an essential cue for analyzing human behavior and attention. Recently, there has been increasing interest in estimating gaze direction from facial videos. However, video gaze estimation faces significant challenges, such as understanding the dynamic evolution of gaze across video sequences, dealing with static backgrounds, and adapting to variations in illumination. To address these challenges, we propose a simple and novel deep learning model that estimates gaze from videos using a specialized attention module. Our method employs a spatial attention mechanism that tracks spatial dynamics within videos; a temporal sequence model then converts these spatial observations into accurate gaze direction predictions, significantly improving gaze estimation accuracy. Additionally, our approach integrates Gaussian processes to capture individual-specific traits, enabling personalization of the model with only a few labeled samples. Experimental results confirm the efficacy of the proposed approach in both within-dataset and cross-dataset settings. Specifically, it achieves state-of-the-art performance on the Gaze360 dataset, improving on prior results by $2.5^\circ$ without personalization; personalizing the model with just three samples yields a further improvement of $0.8^\circ$. The code and pre-trained models are available at \url{this https URL}.
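The spatial-attention-then-temporal pipeline the abstract describes can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the attention scoring vector, the single-layer recurrent state, and all weight names (`w_att`, `W_h`, `W_x`, `W_out`) are hypothetical stand-ins chosen only to show how per-frame spatial attention pooling feeds a sequence model that emits a gaze direction.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatial_attention(feat, w_att):
    # feat: (H*W, C) frame feature map flattened over spatial locations.
    # w_att: (C,) hypothetical learned projection scoring each location.
    scores = softmax(feat @ w_att)   # (H*W,) attention over spatial locations
    return scores @ feat             # (C,) attention-pooled frame descriptor

def temporal_gaze(frames, w_att, W_h, W_x, W_out):
    # frames: (T, H*W, C). A simple recurrent pass turns the sequence of
    # attended spatial descriptors into a temporal state, from which a
    # 2-D gaze direction (e.g., yaw and pitch) is read out.
    h = np.zeros(W_h.shape[0])
    for feat in frames:
        x = spatial_attention(feat, w_att)
        h = np.tanh(W_h @ h + W_x @ x)   # temporal state carries gaze dynamics
    return W_out @ h                     # (2,) predicted gaze angles
```

In a real model the frame features would come from a CNN backbone and the recurrence would be a learned GRU/LSTM or transformer; the sketch only shows the data flow from spatial attention to temporal prediction.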
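The few-shot personalization via Gaussian processes mentioned in the abstract can be sketched as GP regression that predicts a per-person correction from a handful of calibration samples. This is an assumption about the mechanism, not the paper's method: the choice to regress residuals, the RBF kernel, and the `noise` and `ls` parameters are all illustrative.

```python
import numpy as np

def rbf(A, B, ls=1.0):
    # Squared-exponential kernel between row vectors of A (n,d) and B (m,d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

def personalize(feats_cal, resid_cal, feats_new, noise=1e-2, ls=1.0):
    # GP regression: from a few calibration samples, predict the
    # person-specific residual (true gaze minus base-model prediction).
    # feats_cal: (n,d) features of labeled samples; resid_cal: (n,2) residuals.
    K = rbf(feats_cal, feats_cal, ls) + noise * np.eye(len(feats_cal))
    Ks = rbf(feats_new, feats_cal, ls)
    return Ks @ np.linalg.solve(K, resid_cal)  # (m,2) gaze corrections
```

At test time the corrected prediction would be `base_prediction + personalize(...)`; with only three labeled samples, as in the abstract, `n = 3` and the GP interpolates the subject's systematic offset.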
URL
https://arxiv.org/abs/2404.05215