FR-Net: A Light-weight FFT Residual Net For Gaze Estimation

2023-05-04 12:49:07
Tao Xu, Bo Wu, Ruilong Fan, Yun Zhou, Di Huang

Abstract

Gaze estimation is a crucial task in computer vision; however, existing methods suffer from high computational costs, which limit their practical deployment in resource-limited environments. In this paper, we propose a novel lightweight model, FR-Net, for accurate gaze angle estimation with significantly reduced computational complexity. FR-Net utilizes the Fast Fourier Transform (FFT) to extract gaze-relevant features in the frequency domain while reducing the number of parameters. Additionally, we introduce a shortcut component that focuses on the spatial domain to further improve the accuracy of our model. Our experimental results demonstrate that our approach achieves substantially lower gaze angular error (3.86° on MPII and 4.51° on EYEDIAP) than state-of-the-art gaze estimation methods, while using 17 times fewer parameters (0.67M) and only 12% of the FLOPs (0.22B). Furthermore, our method outperforms existing lightweight methods in both accuracy and efficiency on the gaze estimation task. These results suggest that our proposed approach has significant potential for applications in areas such as human-computer interaction and driver assistance systems.
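The abstract only sketches the architecture at a high level. As a concrete illustration, the PyTorch snippet below shows one plausible shape of an FFT-based residual block with a spatial-domain shortcut. This is a minimal sketch under stated assumptions, not the authors' implementation: the class name FFTResidualBlock, the learnable per-frequency filter, and the depthwise-convolution shortcut are hypothetical choices consistent with the description in the abstract.

```python
# Hypothetical sketch of an FFT residual block with a spatial shortcut.
# Not the paper's architecture; illustrates the frequency-domain idea only.
import torch
import torch.nn as nn


class FFTResidualBlock(nn.Module):
    """Filters features with a learnable frequency-domain mask (rFFT bins)
    and adds a cheap spatial-domain shortcut, as the abstract suggests."""

    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # Learnable complex weights stored as (real, imag) pairs,
        # one per rFFT frequency bin.
        self.freq_weight = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, 2) * 0.02)
        # Spatial shortcut: a depthwise conv keeps local detail that the
        # frequency branch may smooth out, at very low parameter cost.
        self.shortcut = nn.Conv2d(channels, channels, kernel_size=3,
                                  padding=1, groups=channels)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frequency branch: rFFT -> complex multiply -> inverse rFFT.
        freq = torch.fft.rfft2(x, norm="ortho")
        freq = freq * torch.view_as_complex(self.freq_weight)
        spatial = torch.fft.irfft2(freq, s=x.shape[-2:], norm="ortho")
        return self.act(self.norm(spatial + self.shortcut(x)))


# Usage on a batch of 96x96 face crops (input size is an assumption).
block = FFTResidualBlock(channels=3, height=96, width=96)
out = block(torch.randn(4, 3, 96, 96))
print(out.shape)  # torch.Size([4, 3, 96, 96])
```

A frequency-domain multiply like this replaces a large-kernel convolution with element-wise weights, which is one way such a design can cut parameters and FLOPs while keeping a global receptive field.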


URL

https://arxiv.org/abs/2305.11875

PDF

https://arxiv.org/pdf/2305.11875.pdf

