Abstract
Leveraging both visual frames and audio has proven effective for improving large-scale video classification. Previous research on video classification mainly focuses on analyzing the visual content of extracted video frames and aggregating their temporal features, while multimodal fusion is typically achieved with simple operators such as averaging and concatenation. Inspired by the success of bilinear pooling in vision-and-language fusion, we introduce multi-modal factorized bilinear pooling (MFB) to fuse visual and audio representations. We combine MFB with different video-level features and explore its effectiveness in video classification. Experimental results on the challenging YouTube-8M v2 dataset demonstrate that MFB significantly outperforms simple fusion methods in large-scale video classification.
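The fusion operation named in the abstract can be sketched as follows. This is a minimal NumPy illustration of the standard MFB formulation (two low-rank projections, an element-wise product, sum pooling over the factor dimension, then power and L2 normalization); the function name `mfb_fuse`, the dimensions, and the projection matrices are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def mfb_fuse(x, y, U, V, k):
    """Fuse a visual vector x and an audio vector y with MFB.

    U: (dim_x, o*k) and V: (dim_y, o*k) are learned projection
    matrices (random here for illustration); k is the factor size,
    so the fused output has o = (o*k) / k dimensions.
    """
    # Project both modalities into a shared o*k space and take the
    # element-wise (Hadamard) product -- the low-rank bilinear core.
    joint = (U.T @ x) * (V.T @ y)
    # Sum-pool over each group of k factors to get an o-dim vector.
    z = joint.reshape(-1, k).sum(axis=1)
    # Power normalization (signed square root) stabilizes training.
    z = np.sign(z) * np.sqrt(np.abs(z))
    # L2 normalization.
    return z / (np.linalg.norm(z) + 1e-12)

# Illustrative usage with assumed dimensions.
rng = np.random.default_rng(0)
dim_x, dim_y, o, k = 8, 6, 4, 3
x, y = rng.standard_normal(dim_x), rng.standard_normal(dim_y)
U, V = rng.standard_normal((dim_x, o * k)), rng.standard_normal((dim_y, o * k))
fused = mfb_fuse(x, y, U, V, k)  # shape (4,), unit L2 norm
```

In practice `U` and `V` would be trainable linear layers and the fused vector would feed a classifier over the video labels.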
URL
https://arxiv.org/abs/1809.05848