Abstract
Appearance-based gaze estimation has been actively studied in recent years. However, its generalization performance for unseen head poses is still a significant limitation for existing methods. This work proposes a generalizable multi-view gaze estimation task and a cross-view feature fusion method to address this issue. In addition to paired images, our method takes the relative rotation matrix between two cameras as additional input. The proposed network learns to extract rotatable feature representation by using relative rotation as a constraint and adaptively fuses the rotatable features via stacked fusion modules. This simple yet efficient approach significantly improves generalization performance under unseen head poses without significantly increasing computational cost. The model can be trained with random combinations of cameras without fixing the positioning and can generalize to unseen camera pairs during inference. Through experiments using multiple datasets, we demonstrate the advantage of the proposed method over baseline methods, including state-of-the-art domain generalization approaches.
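The abstract describes constraining features to be "rotatable" via the relative rotation between cameras. The paper's actual architecture is not given here, but the core idea can be sketched as a consistency loss: reshape each view's feature into 3D vectors, rotate view A's feature by the relative rotation matrix, and penalize its distance from view B's feature. All names below (`rotate_features`, `rotation_consistency_loss`) are illustrative assumptions, not the authors' API.

```python
import numpy as np

def rotate_features(feat, R):
    # feat: (d, 3) rotatable feature, one 3D vector per channel
    # R: (3, 3) relative rotation matrix between the two cameras
    return feat @ R.T

def rotation_consistency_loss(feat_a, feat_b, R_ab):
    # Hypothetical constraint: rotating view-A features by R_ab
    # should reproduce view-B features if they are truly rotatable.
    return float(np.mean((rotate_features(feat_a, R_ab) - feat_b) ** 2))

# Toy check: features that are exact rotations of each other give zero loss.
rng = np.random.default_rng(0)
feat_a = rng.standard_normal((16, 3))
theta = np.pi / 6  # 30-degree rotation about the z-axis
R_ab = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                 [np.sin(theta),  np.cos(theta), 0.0],
                 [0.0,            0.0,           1.0]])
feat_b = feat_a @ R_ab.T
loss = rotation_consistency_loss(feat_a, feat_b, R_ab)
```

In this toy setup the loss is zero by construction; during training such a term would push the encoder toward representations that transform equivariantly with camera rotation, which is what enables generalization to unseen camera pairs.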
URL
https://arxiv.org/abs/2305.12704