Abstract
Eye-tracking applications that utilize the human gaze in video understanding tasks have become increasingly important. To effectively automate the process of video analysis based on eye-tracking data, it is important to accurately replicate human gaze behavior. However, this task presents significant challenges due to the inherent complexity and ambiguity of human gaze patterns. In this work, we introduce a novel method for simulating human gaze behavior. Our approach uses a transformer-based reinforcement learning algorithm to train an agent that acts as a human observer, whose primary role is to watch videos and simulate human gaze behavior. We employed an eye-tracking dataset gathered from videos generated by the VirtualHome simulator, with a primary focus on activity recognition. Our experimental results demonstrate the effectiveness of our gaze prediction method by highlighting its capability to replicate human gaze behavior and its applicability to downstream tasks where real human gaze is used as input.
URL
https://arxiv.org/abs/2404.07351