Abstract
We propose a content-based system for matching videos with background music. The system aims to address the challenge of music recommendation for new users or new music, given short-form videos. To this end, we propose a cross-modal framework, VMCML, that finds a shared embedding space between video and music representations. To ensure the embedding space can be shared effectively by both representations, we leverage CosFace, a margin-based cosine similarity loss. Furthermore, we establish a large-scale dataset called MSVD, which provides 390 individual music tracks and the corresponding 150,000 matched videos. We conduct extensive experiments on the YouTube-8M and MSVD datasets. Our quantitative and qualitative results demonstrate the effectiveness of the proposed framework, which achieves state-of-the-art video-music matching performance.
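For readers unfamiliar with it, the CosFace loss mentioned above subtracts an additive margin m from the ground-truth class's cosine similarity before a scaled softmax cross-entropy. The following is a minimal NumPy sketch of that formulation, not the authors' implementation; the scale `s` and margin `m` values are illustrative hyperparameters.

```python
import numpy as np

def cosface_loss(embeddings, weights, labels, s=30.0, m=0.35):
    """Large-margin cosine loss (CosFace): cosine logits with an additive
    margin m on the target class, scaled by s, fed to softmax cross-entropy.
    Note: s and m here are illustrative hyperparameter choices."""
    # L2-normalize embeddings and class weights so dot products are cosines
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    w = weights / np.linalg.norm(weights, axis=1, keepdims=True)
    cos = e @ w.T                                   # (N, C) cosine similarities
    logits = s * cos
    # subtract the margin only from the ground-truth class logit
    rows = np.arange(len(labels))
    logits[rows, labels] = s * (cos[rows, labels] - m)
    # numerically stable softmax cross-entropy
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[rows, labels].mean()
```

Because the margin lowers the target-class logit, the loss with m > 0 is strictly larger than with m = 0 for the same inputs, which is what forces a larger angular separation between classes in the shared embedding space.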
Abstract (translated)
We propose a content-based system for matching videos with background music. The system aims to address the challenge of recommending music for new users or new music, given short-form videos. To this end, we propose a cross-modal framework, VMCML, that finds a shared embedding space between video and music representations. To ensure the embedding space can be shared effectively by both representations, we leverage CosFace, a margin-based cosine similarity loss. In addition, we build a large-scale dataset, MSVD, which provides 390 individual music tracks and 150,000 matched videos. We conduct extensive experiments on the YouTube-8M and MSVD datasets. Our quantitative and qualitative results demonstrate the effectiveness of the proposed framework and achieve state-of-the-art video-music matching performance.
URL
https://arxiv.org/abs/2303.12379