Abstract
This paper presents a novel approach to processing multimodal data for dynamic emotion recognition, named the Multimodal Masked Autoencoder for Dynamic Emotion Recognition (MultiMAE-DER). MultiMAE-DER leverages the closely correlated representation information within spatiotemporal sequences across the visual and audio modalities. By utilizing a pre-trained masked autoencoder model, MultiMAE-DER is obtained through simple, straightforward fine-tuning. Its performance is enhanced by optimizing six fusion strategies for multimodal input sequences. These strategies address dynamic feature correlations within cross-domain data across spatial, temporal, and spatiotemporal sequences. Compared with state-of-the-art multimodal supervised learning models for dynamic emotion recognition, MultiMAE-DER improves the weighted average recall (WAR) by 4.41% on the RAVDESS dataset and by 2.06% on CREMA-D. Furthermore, compared with the state-of-the-art multimodal self-supervised learning model, MultiMAE-DER achieves a 1.86% higher WAR on the IEMOCAP dataset.
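The abstract names six fusion strategies for the multimodal input sequence but does not specify them. Below is a minimal, hypothetical PyTorch sketch of one plausible strategy of this kind: projecting visual frame patches and audio spectrogram patches into a shared embedding space and concatenating them along the token axis before a shared Transformer encoder, followed by an emotion classification head. All class and parameter names here (`MultimodalFusionClassifier`, patch sizes, depth) are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MultimodalFusionClassifier(nn.Module):
    """Hypothetical sketch of one token-concatenation fusion strategy;
    the paper's actual six strategies are not given in the abstract."""
    def __init__(self, embed_dim=768, num_classes=8, depth=4, num_heads=8):
        super().__init__()
        # Project raw patch features from each modality into a shared space.
        self.visual_proj = nn.Linear(3 * 16 * 16, embed_dim)  # 16x16 RGB patches
        self.audio_proj = nn.Linear(16 * 16, embed_dim)       # 16x16 spectrogram patches
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, visual_patches, audio_patches):
        # Fuse along the token axis: [B, Tv, D] ++ [B, Ta, D] -> [B, Tv+Ta, D].
        tokens = torch.cat(
            [self.visual_proj(visual_patches), self.audio_proj(audio_patches)],
            dim=1)
        encoded = self.encoder(tokens)
        # Mean-pool over tokens, then classify into emotion categories.
        return self.head(encoded.mean(dim=1))

# Toy usage: 8 visual patch tokens and 4 audio patch tokens per clip.
model = MultimodalFusionClassifier()
v = torch.randn(2, 8, 3 * 16 * 16)
a = torch.randn(2, 4, 16 * 16)
logits = model(v, a)  # -> shape [2, 8]
```

In a fine-tuning setup like the one the abstract describes, the encoder weights would be initialized from a pre-trained masked autoencoder rather than trained from scratch.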
URL
https://arxiv.org/abs/2404.18327