Abstract
Temporal Action Localization (TAL) aims to identify the start times, end times, and class labels of actions in untrimmed videos. While recent advances using transformer networks and Feature Pyramid Networks (FPNs) have improved visual feature recognition in TAL, less progress has been made on integrating audio features into such frameworks. This paper introduces Multi-Resolution Audio-Visual Feature Fusion (MRAV-FF), a method for merging audio-visual data across different temporal resolutions. Central to our approach is a hierarchical gated cross-attention mechanism that selectively weighs the importance of audio information at each temporal scale. This technique not only refines the precision of regression boundaries but also improves classification confidence. Importantly, MRAV-FF is versatile: it is compatible with existing FPN TAL architectures and offers a significant performance gain when audio data is available.
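The abstract gives no implementation details, but the core idea of gated cross-attention applied at several temporal resolutions can be illustrated with a minimal NumPy sketch. All names, shapes, and the single-head formulation below are assumptions for illustration, not the authors' code:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_cross_attention(video, audio, Wq, Wk, Wv, Wg):
    """One fusion step: video tokens attend to audio tokens, and a
    learned sigmoid gate decides how much audio-attended signal to mix in.
    video: (Tv, d), audio: (Ta, d); each W*: (d, d). (Illustrative only.)"""
    q, k, v = video @ Wq, audio @ Wk, audio @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))   # (Tv, Ta) attention map
    fused = attn @ v                                 # audio context per video token
    gate = 1.0 / (1.0 + np.exp(-(video @ Wg)))       # per-feature gate in (0, 1)
    return video + gate * fused                      # gated residual fusion

# Hierarchical use: apply the same fusion at every FPN level, where each
# level halves the temporal resolution of the one before it.
rng = np.random.default_rng(0)
d = 16
weights = [rng.standard_normal((d, d)) * 0.1 for _ in range(4)]  # Wq, Wk, Wv, Wg
levels = []
for T in (128, 64, 32):                              # pyramid of temporal scales
    video = rng.standard_normal((T, d))
    audio = rng.standard_normal((T, d))
    levels.append(gated_cross_attention(video, audio, *weights))

print([lvl.shape for lvl in levels])                 # one fused map per level
```

Because fusion is a gated residual on the video features, a gate near zero leaves the visual pathway untouched, which is one plausible reason such a design drops into existing FPN TAL architectures without disrupting them.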
URL
https://arxiv.org/abs/2310.03456