Abstract
This paper explores privacy-compliant group-level emotion recognition "in the wild" within the EmotiW Challenge 2023. Group-level emotion recognition is useful in many fields, including social robotics, conversational agents, e-coaching, and learning analytics. This research restricts itself to global features and avoids individual ones, i.e., any feature that could be used to identify or track people in videos (facial landmarks, body poses, audio diarization, etc.). The proposed multimodal model is composed of a video branch and an audio branch with cross-attention between the modalities. The video branch is based on a fine-tuned ViT architecture. The audio branch extracts Mel-spectrograms and feeds them through CNN blocks into a transformer encoder. Our training paradigm includes a generated synthetic dataset that increases the model's sensitivity to facial expressions within the image in a data-driven way. Extensive experiments show the significance of our methodology. Our privacy-compliant proposal performs competitively on the EmotiW challenge, with the best models reaching 79.24% and 75.13% accuracy on the validation and test sets, respectively. Notably, our findings highlight that this accuracy level can be reached with privacy-compliant features using only 5 frames uniformly distributed over the video.
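The abstract mentions using only 5 frames uniformly distributed over the video. The paper itself does not specify the exact sampling scheme, but a common way to realize such uniform temporal sampling is to take the midpoint of each of 5 equal segments; the sketch below illustrates this assumption (the function name and midpoint convention are hypothetical, not taken from the paper):

```python
import numpy as np

def uniform_frame_indices(n_frames: int, n_samples: int = 5) -> np.ndarray:
    """Pick n_samples frame indices evenly spread across a video of n_frames.

    Hypothetical sketch: each index is the midpoint of one of n_samples
    equal temporal segments, giving uniform coverage with very few frames.
    """
    segment = n_frames / n_samples
    return np.floor((np.arange(n_samples) + 0.5) * segment).astype(int)

# e.g. a 100-frame clip sampled at 5 frames yields indices [10, 30, 50, 70, 90]
```

Midpoint sampling (rather than taking the first frame of each segment) avoids biasing the selection toward the start of the clip.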
URL
https://arxiv.org/abs/2312.05265