Spatio-Temporal Attention and Gaussian Processes for Personalized Video Gaze Estimation

2024-04-08 06:07:32
Swati Jindal, Mohit Yadav, Roberto Manduchi

Abstract

Gaze is an essential cue for analyzing human behavior and attention. Recently, there has been increasing interest in estimating gaze direction from facial videos. However, video gaze estimation faces significant challenges, such as modeling the dynamic evolution of gaze across a video sequence, handling static backgrounds, and adapting to variations in illumination. To address these challenges, we propose a simple and novel deep learning model that estimates gaze from videos using a specialized attention module. Our method employs a spatial attention mechanism that tracks spatial dynamics within videos, and a temporal sequence model that converts these spatial observations into gaze direction predictions, significantly improving gaze estimation accuracy. Additionally, our approach integrates Gaussian processes to capture individual-specific traits, enabling personalization of the model with just a few labeled samples. Experimental results confirm the efficacy of the proposed approach in both within-dataset and cross-dataset settings. Specifically, it achieves state-of-the-art performance on the Gaze360 dataset, improving on prior work by $2.5^\circ$ without personalization; personalizing the model with just three samples yields a further $0.8^\circ$ improvement. The code and pre-trained models are available at \url{this https URL}.
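
To make the described pipeline concrete, below is a minimal sketch of a spatio-temporal gaze model: a per-frame CNN backbone, a learned spatial-attention pooling step, and an LSTM that turns the per-frame features into a gaze prediction. This is an illustrative reading of the abstract, not the authors' implementation; all module names, layer sizes, and the choice of LSTM are assumptions.

```python
# Minimal sketch of a spatio-temporal gaze model (assumed architecture,
# not the paper's actual code): CNN backbone -> spatial attention pooling
# -> LSTM -> (pitch, yaw) head.
import torch
import torch.nn as nn


class SpatialAttentionPool(nn.Module):
    """Collapse a CxHxW feature map to a C-vector via a learned attention map."""

    def __init__(self, channels: int):
        super().__init__()
        self.score = nn.Conv2d(channels, 1, kernel_size=1)  # one logit per location

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W)
        attn = self.score(feats).flatten(2).softmax(dim=-1)  # (B, 1, H*W)
        return (feats.flatten(2) * attn).sum(dim=-1)         # (B, C)


class VideoGazeNet(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden: int = 256):
        super().__init__()
        # Tiny stand-in backbone; a real model would use e.g. a ResNet.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = SpatialAttentionPool(feat_dim)
        self.temporal = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # (pitch, yaw)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (B, T, 3, H, W)
        b, t = clip.shape[:2]
        feats = self.backbone(clip.flatten(0, 1))     # (B*T, C, h, w)
        frame_vecs = self.pool(feats).view(b, t, -1)  # (B, T, C)
        seq, _ = self.temporal(frame_vecs)            # (B, T, hidden)
        return self.head(seq[:, -1])                  # gaze for the last frame


if __name__ == "__main__":
    model = VideoGazeNet()
    clip = torch.randn(2, 7, 3, 112, 112)  # 2 clips of 7 frames
    print(model(clip).shape)  # torch.Size([2, 2])
```

The Gaussian-process personalization can likewise be sketched. One common pattern, assumed here, is to fit a GP on the residual between the base model's prediction and the ground truth for the few calibration samples, using the network's feature vector as the GP input; the kernel and hyperparameters below are illustrative.

```python
# Minimal sketch of GP personalization (assumed residual-correction scheme,
# not necessarily the paper's exact formulation).
import numpy as np


def rbf_kernel(a: np.ndarray, b: np.ndarray, length_scale: float = 1.0) -> np.ndarray:
    # a: (n, d), b: (m, d) -> (n, m) RBF Gram matrix
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)


class GPResidualCorrector:
    """GP regression on the base model's gaze residuals (shared kernel, 2 outputs)."""

    def __init__(self, noise: float = 1e-2, length_scale: float = 1.0):
        self.noise, self.length_scale = noise, length_scale

    def fit(self, feats: np.ndarray, residuals: np.ndarray) -> "GPResidualCorrector":
        # feats: (k, d) calibration features; residuals: (k, 2) pitch/yaw errors
        self.x = feats
        k = rbf_kernel(feats, feats, self.length_scale)
        self.alpha = np.linalg.solve(k + self.noise * np.eye(len(feats)), residuals)
        return self

    def predict(self, feats: np.ndarray) -> np.ndarray:
        # Posterior mean of the residual at new features: (n, 2)
        return rbf_kernel(feats, self.x, self.length_scale) @ self.alpha


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats_cal = rng.normal(size=(3, 16))              # 3 labeled samples, as in the paper
    y_err = rng.normal(scale=0.05, size=(3, 2))       # true gaze minus base prediction
    gp = GPResidualCorrector().fit(feats_cal, y_err)
    corrected = gp.predict(rng.normal(size=(5, 16)))  # add this to the base prediction
    print(corrected.shape)  # (5, 2)
```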

URL

https://arxiv.org/abs/2404.05215

PDF

https://arxiv.org/pdf/2404.05215.pdf
