Disentangled Variational Autoencoder for Emotion Recognition in Conversations

2023-05-23 13:50:06
Kailai Yang, Tianlin Zhang, Sophia Ananiadou

Abstract

In Emotion Recognition in Conversations (ERC), the emotions of target utterances are closely dependent on their context. Therefore, existing works train the model to generate the response of the target utterance, aiming to recognise emotions by leveraging contextual information. However, adjacent response generation ignores long-range dependencies and in many cases provides limited affective information. In addition, most ERC models learn a unified distributed representation for each utterance, which lacks interpretability and robustness. To address these issues, we propose a VAD-disentangled Variational AutoEncoder (VAD-VAE), which first introduces a target utterance reconstruction task based on a Variational Autoencoder, then disentangles three affect representations, Valence-Arousal-Dominance (VAD), from the latent space. We further enhance the disentangled representations by introducing VAD supervision signals from a sentiment lexicon and by minimising the mutual information between the VAD distributions. Experiments show that VAD-VAE outperforms the state-of-the-art model on two datasets. Further analysis proves the effectiveness of each proposed module and the quality of the disentangled VAD representations. The code is available at this https URL.
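For readers who want a concrete picture of the method, below is a minimal PyTorch sketch of the idea the abstract describes: a VAE over the target utterance whose latent space is split into Valence, Arousal, and Dominance chunks plus a residual content chunk, with lexicon-derived VAD scores supervising the affect chunks. This is not the authors' code: all names and dimensions (VADVAESketch, vad_dim, etc.) are illustrative assumptions, an MSE reconstruction over the utterance encoding stands in for the paper's utterance reconstruction task, and a simple cross-covariance penalty stands in for the paper's mutual-information minimisation.

```python
# Minimal sketch of the VAD-VAE idea (assumptions throughout, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class VADVAESketch(nn.Module):
    def __init__(self, input_dim=768, hidden_dim=256, vad_dim=16, content_dim=208):
        super().__init__()
        self.vad_dim, self.content_dim = vad_dim, content_dim
        latent_dim = 3 * vad_dim + content_dim
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.to_mu = nn.Linear(hidden_dim, latent_dim)
        self.to_logvar = nn.Linear(hidden_dim, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                                     nn.Linear(hidden_dim, input_dim))
        # One regression head per affect factor, trained on lexicon VAD scores.
        self.vad_heads = nn.ModuleList(nn.Linear(vad_dim, 1) for _ in range(3))

    @staticmethod
    def cross_cov_penalty(x, y):
        # Crude stand-in for the paper's mutual-information minimisation:
        # penalise squared cross-covariance between two latent chunks.
        xc, yc = x - x.mean(0), y - y.mean(0)
        return (xc.T @ yc / x.size(0)).pow(2).mean()

    def forward(self, utt_repr, vad_targets):
        # utt_repr: (B, input_dim) utterance encoding, e.g. from a pre-trained
        # language model; vad_targets: (B, 3) lexicon-derived VAD scores.
        h = self.encoder(utt_repr)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterise
        # Partition the latent code into V, A, D chunks and a content remainder.
        v, a, d, content = torch.split(
            z, [self.vad_dim] * 3 + [self.content_dim], dim=-1)

        recon_loss = F.mse_loss(self.decoder(z), utt_repr)  # reconstruction term
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())  # VAE KL term
        sup_loss = sum(F.mse_loss(head(c).squeeze(-1), vad_targets[:, i])
                       for i, (head, c) in enumerate(zip(self.vad_heads, (v, a, d))))
        mi_loss = (self.cross_cov_penalty(v, a) + self.cross_cov_penalty(a, d)
                   + self.cross_cov_penalty(v, d))
        return recon_loss + kl + sup_loss + mi_loss, (v, a, d, content)


if __name__ == "__main__":
    model = VADVAESketch()
    loss, (v, a, d, c) = model(torch.randn(4, 768), torch.rand(4, 3))
    loss.backward()
    print(loss.item(), v.shape, c.shape)
```

Keeping the affect factors in separate, individually supervised chunks is what gives the representation the interpretability the abstract claims: each chunk can be probed or regressed against human VAD ratings independently of the others.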

URL

https://arxiv.org/abs/2305.14071

PDF

https://arxiv.org/pdf/2305.14071.pdf
