Abstract
Temporal Action Localization (TAL) is a challenging video understanding task that aims to identify and localize actions within a video sequence. Recent studies have emphasized applying long-term temporal context modeling (TCM) blocks, such as complex self-attention mechanisms, to the extracted video clip features. In this paper, we present the simplest method yet for this task and argue that the extracted video clip features are already informative enough to achieve outstanding performance without sophisticated architectures. To this end, we introduce TemporalMaxer, which minimizes long-term temporal context modeling while maximizing the information drawn from the extracted video clip features with a basic, parameter-free, locally operating max-pooling block. By picking out only the most critical information from adjacent, local clip embeddings, this block yields a more efficient TAL model. We demonstrate that TemporalMaxer outperforms state-of-the-art methods that rely on long-term TCM, such as self-attention, on various TAL datasets while requiring significantly fewer parameters and less computation. The code for our approach is publicly available at this https URL
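The core operation the abstract describes — a parameter-free max-pooling block over local temporal windows of clip embeddings — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `kernel_size` and `stride` values are illustrative assumptions, and the function name `temporal_max_pool` is hypothetical.

```python
import numpy as np

def temporal_max_pool(features, kernel_size=3, stride=2):
    """Parameter-free max-pooling over the temporal axis.

    features: (T, D) array of T clip embeddings of dimension D.
    Returns a shorter (T', D) array that keeps, per channel, the
    strongest response within each local temporal window.
    Window size and stride are illustrative, not the paper's settings.
    """
    T, D = features.shape
    pooled = []
    for start in range(0, T - kernel_size + 1, stride):
        window = features[start:start + kernel_size]  # (kernel_size, D)
        pooled.append(window.max(axis=0))             # keep the max per channel
    return np.stack(pooled)

# Example: 8 clip embeddings of dimension 4 -> downsampled sequence of 3
feats = np.arange(32, dtype=np.float32).reshape(8, 4)
pooled = temporal_max_pool(feats)
```

Because the block has no learnable weights, it adds no parameters to the model; all representational power comes from the pretrained clip features themselves, which matches the paper's argument.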
URL
https://arxiv.org/abs/2303.09055