Abstract
This paper introduces a novel convolutional neural network (CNN) framework tailored for end-to-end audio deep learning models, offering advances in efficiency and explainability. In benchmark experiments on three standard speech emotion recognition datasets with five-fold cross-validation, our framework outperforms Mel spectrogram features by up to seven percent. It can potentially replace Mel-Frequency Cepstral Coefficients (MFCC) while remaining lightweight. Furthermore, we demonstrate the efficiency and interpretability of the front-end layer on the PhysioNet Heart Sound Database, illustrating its ability to capture intricate patterns in long waveforms. Our contributions offer a portable solution for building efficient and interpretable models for raw waveform data.
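The abstract does not specify the architecture of the learnable front-end, so the following is only a rough PyTorch sketch of the kind of CNN front-end it describes: a bank of learnable 1-D filters applied directly to the raw waveform, producing a 2-D feature map that downstream layers could consume in place of Mel spectrograms or MFCCs. The class name, filter count, kernel size, and stride are all hypothetical illustrations, not the paper's settings.

```python
import torch
import torch.nn as nn

class WaveformFrontEnd(nn.Module):
    """Minimal learnable front-end over raw waveform (hypothetical sketch).

    Maps a raw waveform to a time-frequency-like representation, standing in
    for the role the abstract assigns to the CNN front-end. All shapes and
    hyperparameters are illustrative assumptions, not the paper's.
    """

    def __init__(self, n_filters: int = 40, kernel_size: int = 401, stride: int = 160):
        super().__init__()
        # A bank of learnable 1-D filters; at a 16 kHz sample rate,
        # stride=160 gives a 10 ms hop, mirroring typical MFCC frame rates.
        self.conv = nn.Conv1d(1, n_filters, kernel_size, stride=stride,
                              padding=kernel_size // 2, bias=False)

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples), raw waveform roughly in [-1, 1]
        x = self.conv(wav.unsqueeze(1))   # (batch, n_filters, frames)
        return torch.log1p(x.abs())       # rectify and compress dynamic range

# Usage: one second of 16 kHz audio -> (batch, 40, 100) feature map.
features = WaveformFrontEnd()(torch.randn(8, 16000))
print(features.shape)  # torch.Size([8, 40, 100])
```

Because the filters are learned rather than fixed like a Mel filterbank, their impulse or frequency responses can be plotted after training, which is one common route to the kind of front-end interpretability the abstract claims.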
URL
https://arxiv.org/abs/2405.01815