Paper Reading AI Learner

CEFER: A Four Facets Framework based on Context and Emotion embedded features for Implicit and Explicit Emotion Recognition

2022-09-28 11:16:32
Fereshteh Khoshnam, Ahmad Baraani-Dastjerdi, M.J. Liaghatdar

Abstract

People's conduct and reactions are driven by their emotions. Online social media has become a major instrument for expressing emotions in written form. Attending to the context and the entire sentence helps us detect emotion in text. However, this perspective can prevent us from noticing some emotional words or phrases, particularly when a word expresses an emotion implicitly rather than explicitly. On the other hand, focusing only on the words and ignoring the context distorts our understanding of the sentence's meaning and feeling. In this paper, we propose a framework that analyses text at both the sentence and word levels, named CEFER (Context and Emotion embedded Framework for Emotion Recognition). Its four facets are to extract features by considering the entire sentence and each individual word simultaneously, as well as both implicit and explicit emotions. The knowledge gained from these data not only mitigates the flaws of the preceding approaches but also strengthens the feature vector. We evaluate several feature spaces using the BERT family and design CEFER based on them. CEFER combines the emotional vector of each word, covering both explicit and implicit emotions, with the context-based feature vector of each word. CEFER outperforms the BERT family. The experimental results demonstrate that identifying implicit emotions is more challenging than detecting explicit ones, and CEFER improves the accuracy of implicit emotion recognition. According to the results, CEFER performs 5% better than the BERT family in recognizing explicit emotions and 3% better in recognizing implicit ones.
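The abstract describes enriching each word's context-based feature vector with an emotion vector covering explicit and implicit emotions. A minimal sketch of that combination step, assuming hypothetical per-word contextual embeddings (e.g., from a BERT-family encoder) and a small emotion lexicon; all names, values, and dimensions here are illustrative, not the authors' implementation:

```python
import numpy as np

# Hypothetical per-word contextual embeddings (e.g., BERT-family output);
# the 4-dimensional vectors are purely illustrative.
contextual = {
    "lost": np.array([0.2, -0.1, 0.5, 0.3]),
    "my":   np.array([0.0, 0.4, -0.2, 0.1]),
    "job":  np.array([0.6, -0.3, 0.2, -0.1]),
}

# Hypothetical per-word emotion scores, e.g., [sadness, anger, joy].
# An implicit cue like "lost" carries emotional weight even though it
# is not an explicit emotion word.
emotion = {
    "lost": np.array([0.8, 0.3, 0.0]),
    "my":   np.array([0.0, 0.0, 0.0]),
    "job":  np.array([0.4, 0.1, 0.0]),
}

def combined_features(tokens):
    """Concatenate each word's contextual vector with its emotion vector,
    yielding the strengthened word-level feature vector."""
    return np.stack(
        [np.concatenate([contextual[t], emotion[t]]) for t in tokens]
    )

feats = combined_features(["lost", "my", "job"])
print(feats.shape)  # 3 words, each with 4 contextual + 3 emotion dims
```

The combined matrix would then feed a downstream classifier; in the paper this fusion is what lets the model see implicit emotional cues that sentence-level context alone misses.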

URL

https://arxiv.org/abs/2209.13999

PDF

https://arxiv.org/pdf/2209.13999.pdf
