Abstract
We present a method for fine-grained control over music generation through inference-time interventions on MusicGen, an autoregressive generative music transformer. Our approach enables timbre transfer, style transfer, and genre fusion by steering the residual stream using the weights of linear probes trained on its activations, or by steering the attention-layer activations in the same manner. We observe that framing probe training as a regression task improves performance, and we hypothesize that the mean-squared-error loss better preserves meaningful directional information in the activation space. Combined with the global conditioning offered by text prompts in MusicGen, our method provides both global and local control over music generation. Audio samples illustrating our method are available at our demo page.
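To make the steering idea concrete, the following is a minimal sketch (not the authors' released code) of adding a linear-probe direction to the residual stream of one decoder layer via a PyTorch forward hook. The names probe_weight, alpha, and the layer index are illustrative assumptions, not part of MusicGen's public API.

```python
import torch

def make_steering_hook(probe_weight: torch.Tensor, alpha: float):
    """Return a forward hook that nudges hidden states along the probe direction."""
    direction = probe_weight / probe_weight.norm()  # unit vector in activation space

    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        steered = hidden + alpha * direction.to(hidden.device, hidden.dtype)
        return (steered,) + output[1:] if isinstance(output, tuple) else steered

    return hook

# Hypothetical usage during autoregressive sampling:
# probe_weight = torch.load("timbre_probe.pt")   # weights of a trained linear probe
# layer = model.transformer.layers[12]           # illustrative choice of layer
# handle = layer.register_forward_hook(make_steering_hook(probe_weight, alpha=4.0))
# ...generate audio tokens with the text-conditioned model...
# handle.remove()
```

The same hook pattern could in principle be attached to an attention sublayer instead of the residual stream, mirroring the attention-steering variant described in the abstract.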
URL
https://arxiv.org/abs/2506.10225