Abstract
Speech data from different domains have distinct acoustic and linguistic characteristics. It is common to train a single multidomain model, such as a Conformer transducer for speech recognition, on a mixture of data from all domains. However, changing the data in one domain or adding a new domain requires retraining the entire multidomain model. To this end, we propose a framework called modular domain adaptation (MDA) that enables a single model to process multidomain data while keeping all parameters domain-specific, i.e., each parameter is trained only on data from one domain. On a streaming Conformer transducer trained only on video caption data, experimental results show that an MDA-based model can achieve performance comparable to the multidomain model on other domains, such as voice search and dictation, by adding per-domain adapters and per-domain feed-forward networks in the Conformer encoder.
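The abstract describes attaching per-domain adapters and per-domain feed-forward networks to a shared encoder, with each module trained only on its own domain's data. The sketch below is a minimal, hypothetical illustration of that routing idea in plain numpy; the class names, dimensions, and the residual/bottleneck layout are assumptions for exposition, not the paper's actual MDA implementation.

```python
import numpy as np

class PerDomainAdapter:
    """Bottleneck adapter with a separate parameter set per domain (hypothetical sketch)."""
    def __init__(self, dim, bottleneck, domains, seed=0):
        rng = np.random.default_rng(seed)
        # One (down, up) projection pair per domain; each pair would be
        # trained only on that domain's data, keeping parameters domain-specific.
        self.params = {
            d: (rng.standard_normal((dim, bottleneck)) * 0.02,
                rng.standard_normal((bottleneck, dim)) * 0.02)
            for d in domains
        }

    def __call__(self, x, domain):
        down, up = self.params[domain]
        h = np.maximum(x @ down, 0.0)   # down-project + ReLU
        return x + h @ up               # residual connection

class ModularEncoderLayer:
    """Simplified Conformer-like layer: a shared core stays fixed, while the
    feed-forward network and adapter are selected by domain ID (MDA-style)."""
    def __init__(self, dim, domains, seed=0):
        rng = np.random.default_rng(seed)
        self.shared = rng.standard_normal((dim, dim)) * 0.02  # base-domain weights
        self.ffn = {d: rng.standard_normal((dim, dim)) * 0.02 for d in domains}
        self.adapter = PerDomainAdapter(dim, dim // 4, domains, seed)

    def __call__(self, x, domain):
        x = x + x @ self.shared          # shared transformation (frozen)
        x = x + x @ self.ffn[domain]     # per-domain feed-forward network
        return self.adapter(x, domain)   # per-domain adapter

layer = ModularEncoderLayer(dim=8, domains=["video", "voice_search", "dictation"])
frames = np.zeros((4, 8))  # 4 frames of 8-dim acoustic features
out = layer(frames, "voice_search")
print(out.shape)  # (4, 8)
```

Adding a new domain in this scheme only requires instantiating and training one new FFN/adapter pair, leaving the shared core and every other domain's parameters untouched, which is the retraining-avoidance property the abstract claims.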
URL
https://arxiv.org/abs/2305.13408