Abstract
The continuous improvement of human-computer interaction technology makes it possible to compute emotions. In this paper, we introduce our submission to the CVPR 2023 Competition on Affective Behavior Analysis in-the-wild (ABAW). Sentiment analysis in human-computer interaction should, as far as possible, start from multiple dimensions, compensating for any single imperfect emotion channel, and determine the final emotion tendency by fitting the multiple results together. We therefore exploit multimodal features extracted from videos of different lengths in the competition dataset, including audio, pose, and images. These well-informed emotion representations drive us to propose an attention-based multimodal framework for emotion estimation. Our system achieves a performance of 0.361 on the validation dataset. The code is available at [this https URL].
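The abstract does not specify the fusion architecture, but the idea of attention-weighted fusion over audio, pose, and image features can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the single learned scoring vector `w_score`, and the feature dimensions are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(features, w_score):
    """Fuse per-modality feature vectors with scalar attention weights.

    features: dict mapping modality name -> (d,) feature vector
              (hypothetical stand-ins for the paper's extracted features)
    w_score:  (d,) scoring vector playing the role of a learned
              attention parameter (assumption; the paper's attention
              mechanism may differ)
    Returns the (d,) fused representation and the per-modality weights.
    """
    names = sorted(features)
    X = np.stack([features[m] for m in names])  # (num_modalities, d)
    scores = X @ w_score                        # one score per modality
    alpha = softmax(scores)                     # weights sum to 1
    fused = alpha @ X                           # attention-weighted sum
    return fused, dict(zip(names, alpha))

# Toy usage with random features for the three modalities in the abstract.
d = 8
feats = {
    "audio": rng.normal(size=d),
    "pose": rng.normal(size=d),
    "image": rng.normal(size=d),
}
w = rng.normal(size=d)
fused, alpha = attention_fuse(feats, w)
```

The fused vector would then feed a regression head for the valence/arousal estimates; the weights `alpha` indicate how much each channel contributes, which is how a stronger modality can compensate for an imperfect one.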
URL
https://arxiv.org/abs/2303.10421