Abstract
Emotional Text-To-Speech (TTS) is an important task in the development of systems (e.g., human-like dialogue agents) that require natural and emotional speech. Existing approaches, however, aim to produce emotional TTS only for speakers seen during training, without considering generalization to unseen speakers. In this paper, we propose ZET-Speech, a zero-shot adaptive emotion-controllable TTS model that allows users to synthesize any speaker's emotional speech using only a short, neutral speech segment and the target emotion label. Specifically, to enable a zero-shot adaptive TTS model to synthesize emotional speech, we propose domain adversarial learning and guidance methods on the diffusion model. Experimental results demonstrate that ZET-Speech successfully synthesizes natural and emotional speech with the desired emotion for both seen and unseen speakers. Samples are at this https URL.
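The abstract mentions domain adversarial learning as one of the key ingredients. A common building block for this family of methods (not necessarily the authors' exact implementation) is a gradient reversal layer: it acts as the identity in the forward pass but negates and scales gradients in the backward pass, so an encoder trained against a speaker classifier learns speaker-independent features. A minimal NumPy sketch of the layer's behavior, with hypothetical names and a manually written backward pass:

```python
import numpy as np

class GradientReversal:
    """Hypothetical sketch of a gradient reversal layer (GRL).

    Forward: identity. Backward: gradients are multiplied by -lambd,
    so the upstream encoder is pushed to *confuse* the downstream
    domain (e.g., speaker) classifier rather than help it.
    """

    def __init__(self, lambd=1.0):
        self.lambd = lambd  # reversal strength (assumed hyperparameter)

    def forward(self, x):
        # Identity in the forward pass: features flow through unchanged.
        return x

    def backward(self, grad_output):
        # Reverse (and scale) the gradient in the backward pass.
        return -self.lambd * grad_output


grl = GradientReversal(lambd=0.5)
features = np.array([1.0, -2.0, 3.0])

# Forward pass leaves the features untouched.
print(grl.forward(features))            # identity output

# Backward pass flips the sign of the incoming gradient and scales it.
print(grl.backward(np.ones_like(features)))  # reversed, scaled gradient
```

In a full training loop, this layer would sit between the shared encoder and the speaker classifier, while the emotion-related losses backpropagate normally; frameworks such as PyTorch express the same idea with a custom autograd function.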
URL
https://arxiv.org/abs/2305.13831