Abstract
Many music AI models learn a mapping between music content and human-defined labels. However, many annotations, such as chords, can be naturally expressed within the music modality itself, e.g., as sequences of symbolic notes. This observation allows both understanding tasks (e.g., chord recognition) and conditional generation tasks (e.g., chord-conditioned melody generation) to be unified under a music-for-music sequence modeling paradigm. In this work, we propose parameter-efficient solutions for a variety of symbolic music-for-music tasks. The high-level idea is that (1) we utilize a pretrained language model (LM) for both the reference and the target sequences, and (2) we link these two LMs via a lightweight adapter. Experiments show that our method achieves superior performance across tasks such as chord recognition, melody generation, and drum track generation. All demos, code, and model weights are publicly available.
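A minimal numpy sketch of one plausible reading of point (2): the target LM's hidden states attend over the reference LM's hidden states through a small cross-attention adapter, while both LMs stay frozen. The class name, dimensions, and cross-attention design here are illustrative assumptions, not the paper's actual adapter.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class CrossAttentionAdapter:
    """Hypothetical lightweight adapter linking two frozen LMs.

    Only the three small projection matrices below would be trained;
    the reference and target LMs themselves stay frozen.
    """
    def __init__(self, d_ref, d_tgt, d_adapter, seed=0):
        rng = np.random.default_rng(seed)
        scale = 0.02
        self.Wq = rng.normal(0, scale, (d_tgt, d_adapter))  # query proj (target side)
        self.Wk = rng.normal(0, scale, (d_ref, d_adapter))  # key proj (reference side)
        self.Wv = rng.normal(0, scale, (d_ref, d_tgt))      # value proj into target space

    def __call__(self, tgt_states, ref_states):
        """tgt_states: (T_tgt, d_tgt), ref_states: (T_ref, d_ref)."""
        q = tgt_states @ self.Wq
        k = ref_states @ self.Wk
        attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))      # (T_tgt, T_ref)
        # Residual connection: condition target states on the reference sequence.
        return tgt_states + attn @ (ref_states @ self.Wv)

adapter = CrossAttentionAdapter(d_ref=8, d_tgt=16, d_adapter=4)
ref = np.random.default_rng(1).normal(size=(5, 8))   # e.g. chord-sequence states
tgt = np.random.default_rng(2).normal(size=(3, 16))  # e.g. melody states
out = adapter(tgt, ref)                              # same shape as tgt: (3, 16)
```

The residual form means the adapter can start near an identity map, which is a common choice when injecting new conditioning into a frozen pretrained model.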
URL
https://arxiv.org/abs/2506.15548