Abstract
Recent codec-based language models (LMs) have revolutionized text-to-speech (TTS). However, since standard codecs tightly couple timbre and prosody, continuation-based LMs inevitably replicate this entanglement, hindering independent control of either factor. Recent efforts attempt to break this entanglement through codec design, but insufficient decoupling remains a critical bottleneck. To tackle this challenge, we propose DisCo-Speech, a zero-shot controllable TTS framework that enables prosody control and voice cloning via a disentangled speech codec (DisCodec) and an LM-based generator. The core component, DisCodec, operates in two stages: 1) tri-factor disentanglement, which explicitly factorizes speech into content, prosody, and timbre subspaces via parallel encoders and hybrid losses; and 2) fusion and reconstruction, which fuses content and prosody into unified content-prosody tokens suitable for LM prediction, while jointly optimizing reconstruction quality to resolve the disentanglement-reconstruction trade-off. With this design, the LM performs prosodic continuation from a style prompt while the decoder handles target-timbre injection, enabling flexible zero-shot control. Experiments show that DisCo-Speech matches state-of-the-art voice cloning performance while outperforming baselines in zero-shot prosody control. By resolving the core entanglement at the codec level, DisCo-Speech provides a robust foundation for controllable speech synthesis. Audio samples are available at this https URL, and the code and weights will be released at the same link.
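The data flow described above (parallel encoders into three subspaces, fusion of content and prosody into a single token stream, and decoder-side timbre injection) can be sketched in a toy form. This is a minimal illustrative sketch only, not the paper's architecture: the linear "encoders", the codebook, the dimensions, and all names (`encode`, `fuse_and_quantize`, `decode`) are hypothetical placeholders standing in for the learned networks.

```python
# Toy sketch of the DisCo-Speech data flow from the abstract.
# All components here are hypothetical stand-ins for learned modules.
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 16       # toy dimensionality of one speech-frame feature
SUB_DIM = 8         # toy dimensionality of each factor subspace
CODEBOOK_SIZE = 32  # toy codebook for content-prosody tokens

# 1) Tri-factor disentanglement: parallel "encoders" project a frame into
#    content, prosody, and timbre subspaces (fixed random projections here;
#    the paper learns these with hybrid disentanglement losses).
W_content = rng.standard_normal((FEAT_DIM, SUB_DIM))
W_prosody = rng.standard_normal((FEAT_DIM, SUB_DIM))
W_timbre = rng.standard_normal((FEAT_DIM, SUB_DIM))

def encode(frame):
    """Return (content, prosody, timbre) factors for one frame."""
    return frame @ W_content, frame @ W_prosody, frame @ W_timbre

# 2) Fusion: content and prosody are fused and quantized into one
#    content-prosody token suitable for LM prediction (nearest codebook entry).
codebook = rng.standard_normal((CODEBOOK_SIZE, 2 * SUB_DIM))

def fuse_and_quantize(content, prosody):
    fused = np.concatenate([content, prosody])
    return int(np.argmin(np.linalg.norm(codebook - fused, axis=1)))

# Decoding: the decoder consumes LM-predicted content-prosody tokens plus a
# timbre vector from the *target* speaker, i.e. target-timbre injection.
def decode(token, timbre):
    return np.concatenate([codebook[token], timbre])

style_frame = rng.standard_normal(FEAT_DIM)   # style/prosody prompt source
target_frame = rng.standard_normal(FEAT_DIM)  # a different (target) speaker

c, p, _ = encode(style_frame)         # content + prosody from the style prompt
_, _, t_target = encode(target_frame) # timbre from the target speaker

tok = fuse_and_quantize(c, p)         # what the LM would predict/continue
out = decode(tok, t_target)           # prosody of the prompt, timbre of target
print(tok, out.shape)
```

The point of the sketch is the factorization of responsibilities: only the fused content-prosody token stream is exposed to the LM, so prosodic continuation and timbre injection can be controlled independently at decode time.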
URL
https://arxiv.org/abs/2512.13251