Abstract
Motion representation plays an important role in video understanding, with applications including action recognition and robot and autonomous-vehicle guidance. Recently, transformer networks, through their self-attention mechanism, have proved effective in many applications. In this study, we introduce a new two-stream transformer video classifier that extracts spatio-temporal information from both frame content and the optical flow representing motion. The proposed model computes self-attention features across the joint optical-flow and temporal-frame domain and models their relationships within the transformer encoder. Experimental results show that the proposed method achieves excellent classification results on three well-known video datasets of human activities.
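The abstract only outlines the architecture at a high level; a minimal PyTorch sketch of one plausible reading, a late-fusion two-stream transformer over per-frame feature tokens for the RGB and optical-flow streams, might look like the following. All module names, dimensions, and the mean-pool/concatenation fusion are assumptions for illustration, not the paper's confirmed design.

```python
# Hypothetical sketch of a two-stream transformer video classifier.
# The fusion strategy and all hyperparameters are assumptions; the
# paper's exact architecture is not specified in the abstract.
import torch
import torch.nn as nn

class TwoStreamTransformerClassifier(nn.Module):
    def __init__(self, feat_dim=768, num_heads=8, num_layers=4, num_classes=101):
        super().__init__()
        # One transformer encoder per stream: RGB frames and optical flow.
        rgb_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=num_heads, batch_first=True)
        self.rgb_encoder = nn.TransformerEncoder(rgb_layer, num_layers=num_layers)
        flow_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=num_heads, batch_first=True)
        self.flow_encoder = nn.TransformerEncoder(flow_layer, num_layers=num_layers)
        # Late fusion: concatenate the pooled stream representations.
        self.head = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, rgb_tokens, flow_tokens):
        # rgb_tokens / flow_tokens: (batch, num_frames, feat_dim) embeddings,
        # assumed to come from a per-frame backbone (not shown here).
        rgb = self.rgb_encoder(rgb_tokens).mean(dim=1)    # temporal pooling
        flow = self.flow_encoder(flow_tokens).mean(dim=1)
        return self.head(torch.cat([rgb, flow], dim=-1))  # class logits

model = TwoStreamTransformerClassifier()
logits = model(torch.randn(2, 16, 768), torch.randn(2, 16, 768))
print(logits.shape)  # torch.Size([2, 101])
```

Late fusion is only one option; the abstract's mention of attention "across the joint optical flow and temporal frame domain" could equally suggest cross-attention between the two token streams inside the encoder.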
URL
https://arxiv.org/abs/2601.14086