Abstract
Understanding action recognition in egocentric videos has emerged as a vital research topic with numerous practical applications. Given the limited scale of egocentric data collection, learning robust deep learning-based action recognition models remains difficult. Transferring knowledge learned from large-scale exocentric data to egocentric data is challenging because of the differences between videos captured from the two views. Our work introduces a novel cross-view learning approach to action recognition (CVAR) that effectively transfers knowledge from the exocentric to the egocentric view. First, we introduce a novel geometry-based constraint into the Transformer's self-attention mechanism, derived from an analysis of the camera positions of the two views. Then, we propose a new cross-view self-attention loss, learned on unpaired cross-view data, that encourages the self-attention mechanism to transfer knowledge across views. Finally, to further improve the performance of our cross-view learning approach, we present metrics that effectively measure the correlations between videos and attention maps. Experimental results on standard egocentric action recognition benchmarks, i.e., Charades-Ego, EPIC-Kitchens-55, and EPIC-Kitchens-100, demonstrate our approach's effectiveness and state-of-the-art performance.
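The abstract does not spell out the form of the cross-view self-attention loss. As a rough illustration of the idea of aligning self-attention behavior across unpaired views, the sketch below computes row-stochastic self-attention maps for an exocentric and an egocentric token sequence and penalizes the divergence between them with a symmetrized KL term. All names (`self_attention_map`, `cross_view_attention_loss`) and the choice of divergence are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def self_attention_map(x, wq, wk):
    # x: (n_tokens, d) token features; wq, wk: (d, d_k) learned projections.
    q, k = x @ wq, x @ wk
    logits = q @ k.T / np.sqrt(k.shape[1])
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    a = np.exp(logits)
    return a / a.sum(axis=1, keepdims=True)      # each row sums to 1

def cross_view_attention_loss(a_exo, a_ego, eps=1e-8):
    # Symmetrized KL divergence between the two views' attention
    # distributions, averaged over query tokens. This stands in for the
    # paper's correlation-based metric, which is not given in this listing.
    def kl(p, q):
        return np.sum(p * np.log((p + eps) / (q + eps)), axis=1)
    return float(np.mean(0.5 * (kl(a_exo, a_ego) + kl(a_ego, a_exo))))

# Toy unpaired sequences from the two views (random stand-ins for features).
rng = np.random.default_rng(0)
d, n = 16, 8
wq, wk = rng.normal(size=(d, d)), rng.normal(size=(d, d))
x_exo, x_ego = rng.normal(size=(n, d)), rng.normal(size=(n, d))
a_exo = self_attention_map(x_exo, wq, wk)
a_ego = self_attention_map(x_ego, wq, wk)
loss = cross_view_attention_loss(a_exo, a_ego)  # non-negative scalar
```

Minimizing such a term pushes the network to attend to tokens in a view-consistent way; the actual method additionally constrains attention geometrically using the relative camera positions.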
URL
https://arxiv.org/abs/2305.15699