Paper Reading AI Learner

Appearance-Based Gaze Estimation Using Dilated-Convolutions

2019-03-18 08:29:32
Zhaokang Chen, Bertram E. Shi

Abstract

Appearance-based gaze estimation has attracted increasing attention because of its wide range of applications. The use of deep convolutional neural networks has improved estimation accuracy significantly. To improve accuracy further, we focus on extracting better features from eye images. Relatively large changes in gaze angle may result in relatively small changes in eye appearance. We argue that current architectures for gaze estimation may not capture such small changes, as they apply multiple pooling or other downsampling layers, significantly reducing the spatial resolution of the high-level layers. To evaluate whether features extracted at high resolution can benefit gaze estimation, we adopt dilated-convolutions to extract high-level features without reducing spatial resolution. In cross-subject experiments on the Columbia Gaze dataset for eye contact detection and the MPIIGaze dataset for 3D gaze vector regression, the resulting Dilated-Nets achieve significant (up to 20.8%) gains over similar networks without dilated-convolutions. Our proposed Dilated-Net achieves state-of-the-art results on both the Columbia Gaze and the MPIIGaze datasets.
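The key property the abstract relies on is that a dilated convolution enlarges the receptive field without shrinking the output, whereas pooling trades resolution for context. A minimal pure-Python 1-D sketch (not the paper's actual network, which uses 2-D dilated-convolutions on eye images) illustrates the contrast:

```python
def dilated_conv1d(x, kernel, dilation=1):
    """Dilated 1-D convolution with 'same' zero padding.
    Receptive field = (len(kernel) - 1) * dilation + 1, yet the
    output has the same length as the input (no downsampling)."""
    k = len(kernel)
    pad = (k - 1) * dilation // 2
    xp = [0.0] * pad + list(x) + [0.0] * pad
    return [sum(kernel[j] * xp[i + j * dilation] for j in range(k))
            for i in range(len(x))]

def max_pool1d(x, stride=2):
    """Max pooling with stride 2 halves the spatial resolution."""
    return [max(x[i:i + stride]) for i in range(0, len(x) - stride + 1, stride)]

x = [1, 2, 3, 4, 5, 6, 7, 8]
# Dilation 2 with a 3-tap kernel covers a 5-sample receptive field,
# but the output still has 8 samples:
y = dilated_conv1d(x, [1, 0, -1], dilation=2)
# Pooling, by contrast, discards half the spatial positions:
p = max_pool1d(x)
```

Here `y` has length 8 while `p` has length 4; stacking dilated layers with growing dilation rates grows context exponentially while keeping full resolution, which is the motivation for using them to detect small appearance changes.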

URL

https://arxiv.org/abs/1903.07296

PDF

https://arxiv.org/pdf/1903.07296.pdf

