Abstract
The scarcity of speaker-annotated far-field speech presents a significant challenge in developing high-performance far-field speaker verification (SV) systems. While data augmentation using large-scale near-field speech has been a common strategy to address this limitation, the mismatch in acoustic environments between near-field and far-field speech significantly hinders the improvement of far-field SV effectiveness. In this paper, we propose an adaptive speech augmentation approach leveraging NaturalSpeech3, a pre-trained foundation text-to-speech (TTS) model, to convert near-field speech into far-field speech by incorporating far-field acoustic ambient noise for data augmentation. Specifically, we utilize FACodec from NaturalSpeech3 to decompose the speech waveform into distinct embedding subspaces: content, prosody, speaker, and residual (acoustic detail) embeddings, and to reconstruct the speech waveform from these disentangled representations. In our method, the prosody, content, and residual embeddings of far-field speech are combined with speaker embeddings from near-field speech to generate augmented pseudo far-field speech that maintains the speaker identity of the out-of-domain near-field speech while preserving the acoustic environment of the in-domain far-field speech. This approach not only serves as an effective strategy for augmenting training data for far-field speaker verification but also extends to cross-data augmentation for enrollment and test speech in evaluation. Experimental results on FFSVC demonstrate that the adaptive data augmentation method significantly outperforms traditional approaches, such as random noise addition and reverberation, as well as other competitive data augmentation strategies.
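The embedding-swap idea above can be sketched schematically. Note this is a toy illustration, not the real NaturalSpeech3/FACodec API: the `decompose` and `reconstruct` functions below are hypothetical stand-ins that split a fixed-length vector into four equal-sized sub-embeddings, purely to show how the far-field content, prosody, and residual embeddings are combined with the near-field speaker embedding.

```python
import numpy as np

EMB_DIM = 4  # toy dimensionality per embedding subspace

def decompose(wave: np.ndarray) -> dict:
    """Toy stand-in for FACodec analysis: split a length-16 vector
    into content / prosody / speaker / residual sub-embeddings."""
    assert wave.shape == (4 * EMB_DIM,)
    parts = wave.reshape(4, EMB_DIM)
    return {
        "content": parts[0],
        "prosody": parts[1],
        "speaker": parts[2],
        "residual": parts[3],  # acoustic details, incl. ambient noise
    }

def reconstruct(emb: dict) -> np.ndarray:
    """Toy stand-in for FACodec synthesis: inverse of decompose."""
    return np.concatenate(
        [emb["content"], emb["prosody"], emb["speaker"], emb["residual"]]
    )

def make_pseudo_far_field(near_wave: np.ndarray,
                          far_wave: np.ndarray) -> np.ndarray:
    """Keep the far-field acoustics; swap in the near-field speaker."""
    near = decompose(near_wave)
    far = decompose(far_wave)
    mixed = {
        "content": far["content"],
        "prosody": far["prosody"],
        "speaker": near["speaker"],   # out-of-domain speaker identity
        "residual": far["residual"],  # in-domain far-field environment
    }
    return reconstruct(mixed)

rng = np.random.default_rng(0)
near = rng.normal(size=4 * EMB_DIM)  # pretend near-field utterance
far = rng.normal(size=4 * EMB_DIM)   # pretend far-field utterance
pseudo = make_pseudo_far_field(near, far)
```

In the paper's actual pipeline, the real FACodec encoder/decoder plays the role of `decompose`/`reconstruct`, and the resulting pseudo far-field utterances are added to the SV training set (or used for cross-data augmentation of enrollment/test speech).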
URL
https://arxiv.org/abs/2501.08691