Abstract
Emotion recognition is a critical task in human-computer interaction, enabling more intuitive and responsive systems. This study presents a multimodal emotion recognition system that combines low-level information from audio and text, leveraging both Convolutional Neural Networks (CNNs) and Bidirectional Long Short-Term Memory networks (BiLSTMs). The proposed system consists of two parallel networks: an Audio Block and a Text Block. Mel Frequency Cepstral Coefficients (MFCCs) are extracted and processed by a BiLSTM network and a 2D convolutional network to capture low-level intrinsic and extrinsic features from speech. In parallel, a combined BiLSTM-CNN network extracts low-level sequential features of the text from word embeddings corresponding to the available audio. The low-level information from speech and text is then concatenated and processed by several fully connected layers to classify the speech emotion. Experimental results demonstrate that the proposed system, EmoTech, accurately recognizes emotions from combined audio and text inputs, achieving an overall accuracy of 84%. This solution outperforms previously proposed approaches on the same dataset and modalities.
URL
https://arxiv.org/abs/2501.12674