Abstract
Speech event detection is crucial for multimedia retrieval, involving the tagging of both semantic and acoustic events. Traditional ASR systems often overlook the interplay between these events, focusing solely on content, even though the interpretation of dialogue can vary with environmental context. This paper tackles two primary challenges in speech event detection: the continual integration of new events without forgetting previous ones, and the disentanglement of semantic from acoustic events. We introduce a new task, continual event detection from speech, for which we also provide two benchmark datasets. To address the challenges of catastrophic forgetting and effective disentanglement, we propose a novel method, 'Double Mixture.' This method merges speech expertise with robust memory mechanisms to enhance adaptability and prevent forgetting. Our comprehensive experiments show that this task presents significant challenges that are not effectively addressed by current state-of-the-art methods in either computer vision or natural language processing. Our approach achieves the lowest rates of forgetting and the highest levels of generalization, proving robust across various continual learning sequences. Our code and data are available at https://anonymous.4open.science/status/Continual-SpeechED-6461.
URL
https://arxiv.org/abs/2404.13289