Abstract
This report presents our team's 'PCIE_LAM' solution to the Ego4D Looking At Me Challenge at CVPR 2024. The goal of the challenge is to accurately determine whether a person in the scene is looking at the camera wearer, given a video in which the faces of social partners have been localized. Our proposed solution, InternLSTM, consists of an InternVL image encoder and a Bi-LSTM network: InternVL extracts spatial features, while the Bi-LSTM captures temporal dependencies. The task is highly challenging because of the distance between the person in the scene and the camera wearer, as well as camera movement, both of which cause significant blurring of the face images. To cope with this, we apply a Gaze Smoothing filter to remove noise and spikes from the output. Our approach took 1st place in the Looking At Me challenge with 0.81 mAP and 0.93 accuracy. Code is available at this https URL
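The abstract does not specify how the Gaze Smoothing filter works; a minimal sketch of the general idea, assuming it suppresses single-frame spikes in the per-frame binary "looking at me" predictions with a sliding majority vote (the function name and window size are illustrative, not from the paper):

```python
def smooth_gaze(preds, window=5):
    """Hypothetical gaze smoothing: majority vote over a sliding window
    of per-frame binary looking-at-me predictions (0 or 1).

    Isolated spikes shorter than the window are flipped to match
    their temporal neighborhood; the window is truncated at the
    sequence boundaries.
    """
    half = window // 2
    smoothed = []
    for i in range(len(preds)):
        lo = max(0, i - half)
        hi = min(len(preds), i + half + 1)
        votes = preds[lo:hi]
        # Keep 1 only if at least half of the neighborhood agrees.
        smoothed.append(1 if 2 * sum(votes) >= len(votes) else 0)
    return smoothed
```

For example, a single spurious positive frame surrounded by negatives is removed, while a single dropped frame inside a sustained gaze segment is filled in.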
URL
https://arxiv.org/abs/2406.12211