Abstract
Learning high-quality video representations has broad applications in computer vision but remains challenging. Prior work based on masked autoencoders, such as ImageMAE and VideoMAE, has demonstrated the effectiveness of learning image and video representations through a reconstruction strategy in the visual modality. However, these models have inherent limitations, particularly when extracting features from the visual modality alone is difficult, such as with low-resolution, blurry source videos. To address this, we propose AV-MaskEnhancer, which learns high-quality video representations by combining visual and audio information. Our approach exploits the complementary nature of audio and video features in cross-modal content. Moreover, on the UCF101 video classification task our results outperform existing work and reach the state of the art, with a top-1 accuracy of 98.8% and a top-5 accuracy of 99.9%.
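The masked-autoencoder reconstruction strategy the abstract refers to can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the 90% mask ratio, and the toy patch shapes are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def masked_reconstruction_loss(patches, reconstruction, mask_ratio=0.9):
    """Illustrative MAE-style objective: hide a large fraction of the
    input patches and score the reconstruction only on the hidden ones.
    (Hypothetical sketch; mask_ratio=0.9 is an assumption, roughly in
    line with common video-MAE settings.)"""
    num_patches = patches.shape[0]
    num_masked = int(mask_ratio * num_patches)
    # Randomly select which patches are masked out of the encoder input.
    masked_idx = rng.choice(num_patches, size=num_masked, replace=False)
    # Mean-squared error restricted to the masked positions.
    diff = patches[masked_idx] - reconstruction[masked_idx]
    return float(np.mean(diff ** 2))

# Toy example: 196 patch embeddings of dimension 768, with a
# reconstruction that is close but not identical to the original.
patches = rng.standard_normal((196, 768))
recon = patches + 0.1 * rng.standard_normal((196, 768))
loss = masked_reconstruction_loss(patches, recon)
```

Because the loss is computed only on masked patches, the encoder must infer the hidden content from the visible context, which is what makes the learned representation useful for downstream tasks such as classification.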
URL
https://arxiv.org/abs/2309.08738