Abstract
We introduce the Music Ternary Modalities Dataset (MTM Dataset), created by our group to support learning joint representations across three music modalities in music information retrieval (MIR), including three types of cross-modal retrieval. Learning joint representations for cross-modal retrieval among three modalities has been limited by the scarcity of large datasets that include three or more modalities. The MTM Dataset addresses this constraint by extending music notes to sheet music and music audio and by building fine-grained alignments between music notes and lyric syllables, so that the dataset can be used to learn joint representations across multimodal music data. The MTM Dataset provides three modalities: sheet music, lyrics, and music audio, along with features extracted by pre-trained models. In this paper, we describe the dataset and how it was built, and we evaluate several baselines for cross-modal retrieval tasks. The dataset and usage examples are available at this https URL.
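The cross-modal retrieval setting described above can be sketched as nearest-neighbor search over pre-extracted features projected into a shared embedding space. The function names and toy data below are illustrative assumptions for a minimal sketch, not the paper's actual pipeline or feature extractors.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Normalize feature vectors so cosine similarity reduces to a dot product.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cross_modal_retrieve(query_feats, gallery_feats, top_k=5):
    """Rank gallery items (one modality) for each query (another modality)
    by cosine similarity in an assumed shared embedding space."""
    q = l2_normalize(np.asarray(query_feats, dtype=np.float64))
    g = l2_normalize(np.asarray(gallery_feats, dtype=np.float64))
    sims = q @ g.T                      # (n_queries, n_gallery)
    order = np.argsort(-sims, axis=1)   # indices sorted by descending similarity
    return order[:, :top_k], sims

# Toy example: hypothetical "audio" embeddings queried against aligned
# "sheet music" embeddings sharing a common latent representation.
rng = np.random.default_rng(0)
shared = rng.standard_normal((4, 8))
audio = shared + 0.01 * rng.standard_normal((4, 8))
sheet = shared + 0.01 * rng.standard_normal((4, 8))
ranks, _ = cross_modal_retrieve(audio, sheet, top_k=1)
print(ranks.ravel())  # each query should retrieve its aligned counterpart
```

The same retrieval routine applies to any ordered pair of the three modalities once their features live in one space, which is what a joint-representation model is trained to provide.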
URL
https://arxiv.org/abs/2012.00290