
Multimodal Group Emotion Recognition In-the-wild Using Privacy-Compliant Features

2023-12-06 08:58:11
Anderson Augusma (M-PSI, SVH), Dominique Vaufreydaz (M-PSI), Frédérique Letué (SVH)

Abstract

This paper explores privacy-compliant group-level emotion recognition "in-the-wild" within the EmotiW Challenge 2023. Group-level emotion recognition can be useful in many fields, including social robotics, conversational agents, e-coaching, and learning analytics. This research restricts itself to global features and avoids individual ones, i.e. all features that could be used to identify or track people in videos (facial landmarks, body poses, audio diarization, etc.). The proposed multimodal model is composed of a video branch and an audio branch with cross-attention between modalities. The video branch is based on a fine-tuned ViT architecture. The audio branch extracts Mel-spectrograms and feeds them through CNN blocks into a transformer encoder. Our training paradigm includes a generated synthetic dataset to increase, in a data-driven way, the sensitivity of the model to facial expressions within the image. Extensive experiments show the significance of our methodology. Our privacy-compliant proposal performs fairly on the EmotiW challenge, with 79.24% and 75.13% accuracy on the validation and test sets, respectively, for the best models. Notably, our findings highlight that this accuracy level can be reached with privacy-compliant features using only 5 frames uniformly distributed over the video.
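The abstract outlines the full pipeline: uniformly sampled frames fed to a fine-tuned ViT, Mel-spectrograms passed through CNN blocks into a transformer encoder, and cross-attention fusing the two streams. The PyTorch sketch below illustrates one plausible reading of that design; it is not the authors' code, and the layer sizes, the timm checkpoint name, and the 3-class head (positive/neutral/negative group emotion) are assumptions.

```python
import torch
import torch.nn as nn
import timm  # assumed source of the pretrained ViT backbone


class AudioBranch(nn.Module):
    """Mel-spectrogram -> CNN blocks -> transformer encoder (the abstract's audio path)."""

    def __init__(self, d_model=256, n_mels=64):
        super().__init__()
        self.cnn = nn.Sequential(  # two strided conv blocks; sizes are guesses
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(64 * (n_mels // 4), d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, mel):                    # mel: (B, 1, n_mels, T)
        x = self.cnn(mel)                      # (B, 64, n_mels/4, T/4)
        x = x.permute(0, 3, 1, 2).flatten(2)   # (B, T/4, 64 * n_mels/4)
        return self.encoder(self.proj(x))      # (B, T/4, d_model)


class GroupEmotionModel(nn.Module):
    """Video (ViT) and audio branches fused by cross-attention."""

    def __init__(self, d_model=256, num_classes=3):
        super().__init__()
        self.vit = timm.create_model("vit_base_patch16_224",
                                     pretrained=True, num_classes=0)
        self.video_proj = nn.Linear(self.vit.num_features, d_model)
        self.audio = AudioBranch(d_model)
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=4,
                                                batch_first=True)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, frames, mel):            # frames: (B, 5, 3, 224, 224)
        b, t = frames.shape[:2]
        v = self.vit(frames.flatten(0, 1))     # (B*5, vit_features), whole frames only
        v = self.video_proj(v).view(b, t, -1)  # (B, 5, d_model)
        a = self.audio(mel)                    # (B, Ta, d_model)
        fused, _ = self.cross_attn(v, a, a)    # video tokens attend to audio
        return self.head(fused.mean(dim=1))    # (B, num_classes)
```

A hypothetical call, showing the 5-frame uniform sampling mentioned in the abstract and a torchaudio Mel-spectrogram front end (the file name and the `video` tensor are placeholders):

```python
import torchaudio

wave, sr = torchaudio.load("clip.wav")                       # hypothetical clip
mel = torchaudio.transforms.MelSpectrogram(sr, n_mels=64)(wave.mean(0))
idx = torch.linspace(0, video.shape[1] - 1, steps=5).long()  # 5 uniform frames
logits = GroupEmotionModel()(video[:, idx], mel.log1p()[None, None])
```

Note that operating the ViT on whole frames, rather than on detected faces or bodies, is what keeps this reading consistent with the paper's privacy-compliant, global-features-only constraint.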

URL

https://arxiv.org/abs/2312.05265

PDF

https://arxiv.org/pdf/2312.05265.pdf

