Abstract
Self-supervised learning leverages unlabeled data effectively, improving label efficiency and generalization to domains without labeled data. While recent work has studied generalization to more acoustic/linguistic domains, languages, and modalities, these investigations are limited to single-source speech with one primary speaker per recording. This paper presents Cocktail HuBERT, a self-supervised learning framework that generalizes to mixture speech using a masked pseudo source separation objective. The objective encourages the model to identify the number of sources, to separate and understand the context, and to infer the content of masked regions represented as discovered units. Cocktail HuBERT outperforms the state of the art, with 69% lower WER on multi-speaker ASR and 31% lower DER on diarization, and is competitive on single- and multi-speaker tasks from SUPERB.
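To make the objective concrete: the abstract describes predicting discovered units for masked regions of each source in a mixture, which suggests a permutation-invariant masked cross-entropy. The paper does not give this formula here, so the sketch below is a hypothetical minimal implementation under that assumption; the function name, tensor shapes, and the pairing strategy (exhaustive permutation search) are illustrative, not the authors' actual recipe.

```python
import itertools
import numpy as np

def masked_pseudo_separation_loss(logits, targets, mask):
    """Hypothetical sketch of a masked pseudo source separation objective.

    logits:  (S, T, V) array, per-source unit logits from the model
    targets: (S, T) array, discovered-unit labels for each pseudo source
    mask:    (T,) boolean array, True where frames were masked

    Returns the permutation-invariant cross-entropy averaged over
    masked frames, assuming the model does not know which output
    head corresponds to which source.
    """
    def ce(lg, tg):
        # cross-entropy for one (prediction, source) pairing, masked frames only
        lg, tg = lg[mask], tg[mask]
        logp = lg - np.log(np.exp(lg).sum(axis=-1, keepdims=True))
        return -logp[np.arange(len(tg)), tg].mean()

    n_sources = targets.shape[0]
    # permutation-invariant training: score every assignment of
    # prediction heads to pseudo sources and keep the best one
    return min(
        sum(ce(logits[p], targets[s]) for s, p in enumerate(perm)) / n_sources
        for perm in itertools.permutations(range(n_sources))
    )
```

Because the loss minimizes over permutations, relabeling which pseudo source is "first" leaves it unchanged, which is the property that lets the model handle mixtures without a canonical speaker order.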
URL
https://arxiv.org/abs/2303.11131