Abstract
Ensuring intelligible speech communication with hearing assistive devices in low-latency scenarios poses significant challenges for speech enhancement, coding, and transmission. In this paper, we propose novel solutions for low-latency joint speech transmission and enhancement based on deep neural networks (DNNs). Our approach integrates two state-of-the-art DNN architectures, one for low-latency speech enhancement and one for low-latency analog joint source-channel transmission, into a combined low-latency system whose components are trained jointly in an end-to-end fashion. Because the enhancement system is computationally demanding, placing it before transmission is suitable when high computational power is unavailable at the decoder, as in hearing assistive devices. The proposed system allows the total latency to be configured and achieves high performance even at latencies as low as 3 ms, which are typically difficult to attain. Simulation results provide compelling evidence that a jointly trained enhancement and transmission system outperforms a simple concatenation of the two systems across diverse settings, encompassing various wireless channel conditions, latencies, and background noise scenarios.
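To make the latency figure concrete: the 3 ms quoted above corresponds to a 48-sample frame at a 16 kHz sampling rate (an assumed rate for illustration; the paper does not state it here), and the analog transmission step can be pictured as sending power-normalized samples over a noisy channel. The sketch below is purely illustrative: the paper's actual system uses learned DNN encoders and decoders, whereas here the "encoding" is just power normalization followed by an AWGN channel, and `awgn_channel` and `frame_len` are hypothetical names.

```python
import math
import random

def awgn_channel(x, snr_db):
    """Send analog samples over an AWGN channel at a given SNR (dB).

    Illustrative stand-in for the learned analog joint source-channel
    transmission: here the transmit signal is the raw frame itself, and
    only the additive channel noise is modeled.
    """
    signal_power = sum(v * v for v in x) / len(x)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    sigma = math.sqrt(noise_power)
    return [v + random.gauss(0.0, sigma) for v in x]

# A 3 ms frame at an assumed 16 kHz sampling rate is 48 samples --
# the per-frame algorithmic latency quoted in the abstract.
frame_len = int(0.003 * 16000)  # 48 samples
frame = [math.sin(2 * math.pi * 440 * n / 16000) for n in range(frame_len)]
received = awgn_channel(frame, snr_db=10.0)
```

In the joint system described in the abstract, both the enhancement DNN (before the channel) and the receiver DNN (after it) would be trained end-to-end through such a channel model, rather than being trained separately and concatenated.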
URL
https://arxiv.org/abs/2404.19375