Abstract
Appearance-based gaze estimation has shown great promise in many applications by using a single general-purpose camera as the input device. However, its success is highly dependent on the availability of large-scale, well-annotated gaze datasets, which are scarce and expensive to collect. To alleviate this challenge, we propose ConGaze, a contrastive learning-based framework that leverages unlabeled facial images to learn generic gaze-aware representations across subjects in an unsupervised way. Specifically, we introduce a gaze-specific data augmentation to preserve the gaze-semantic features and maintain gaze consistency, which are proven to be crucial for effective contrastive gaze representation learning. Moreover, we devise a novel subject-conditional projection module that encourages a shared feature extractor to learn gaze-aware and generic representations. Our experiments on three public gaze estimation datasets show that ConGaze outperforms existing unsupervised learning solutions by 6.7% to 22.5%, and achieves 15.1% to 24.6% improvement over its supervised learning-based counterpart in cross-dataset evaluations.
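To make the contrastive setup concrete, below is a minimal numpy sketch of the two components the abstract names: a SimCLR-style NT-Xent contrastive loss over two gaze-preserving augmented views of each face image, and a subject-conditional projection that applies a per-subject head on top of shared-encoder features. The abstract does not give the paper's actual formulation, so the NT-Xent choice, the per-subject linear heads, and all function names here are illustrative assumptions, not ConGaze's implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent loss over two batches of embeddings.

    z1[i] and z2[i] are projections of two gaze-preserving augmentations
    of the same face image (a positive pair); every other pairing in the
    combined batch serves as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)                 # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)     # L2-normalize
    sim = (z @ z.T) / temperature                        # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                       # mask self-similarity
    n = len(z1)
    # Index of each sample's positive partner in the combined batch.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

def subject_conditional_project(features, subject_ids, heads):
    """Apply a per-subject projection head to shared-encoder features.

    `heads` maps each subject id to its own projection matrix (a guessed
    stand-in for the paper's subject-conditional projection module); the
    encoder producing `features` is shared across all subjects.
    """
    return np.stack([features[i] @ heads[s]
                     for i, s in enumerate(subject_ids)])
```

As a sanity check, identical views (perfect positives) should yield a lower loss than unrelated random embeddings, since the positive pair then dominates the softmax over negatives.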
URL
https://arxiv.org/abs/2309.04506