Abstract
Symbolic music analysis tasks are often performed by models originally developed for Natural Language Processing, such as Transformers. Such models require the input data to be represented as sequences, which is achieved through a process of tokenization. Tokenization strategies for symbolic music often rely on absolute MIDI values to represent pitch information. However, music research largely emphasizes the benefit of higher-level representations, such as melodic contour and harmonic relations, for which pitch intervals are more expressive than absolute pitches. In this work, we introduce a general framework for building interval-based tokenizations. By evaluating these tokenizations on three music analysis tasks, we show that interval-based tokenizations improve model performance and facilitate model explainability.
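To make the core idea concrete, here is a minimal sketch (not the paper's actual tokenizer) of converting a melody given as absolute MIDI pitch values into a sequence of pitch intervals. The function name and note sequences are illustrative assumptions; the point is that the interval representation is invariant under transposition, which is one reason it can be more expressive for tasks involving melodic contour:

```python
def to_intervals(midi_pitches):
    """Return signed pitch intervals (in semitones) between consecutive notes."""
    return [b - a for a, b in zip(midi_pitches, midi_pitches[1:])]

# The same melodic fragment in C major and transposed up a whole tone to D major...
c_major = [60, 62, 64, 65, 67]   # C4 D4 E4 F4 G4
d_major = [62, 64, 66, 67, 69]   # D4 E4 F#4 G4 A4

# ...yields identical interval tokens, illustrating transposition invariance.
assert to_intervals(c_major) == to_intervals(d_major) == [2, 2, 1, 2]
```

An absolute-pitch tokenization would assign these two fragments entirely disjoint token sequences, so a model must learn transposition equivalence from data rather than getting it for free from the representation.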
URL
https://arxiv.org/abs/2501.04630