Abstract
We propose a cross-modal co-attention model for continuous emotion recognition using visual-audio-linguistic information. The model consists of four blocks. The visual, audio, and linguistic blocks are used to learn the spatial-temporal features of the multimodal input. A co-attention block is designed to fuse the learned embeddings with a multi-head co-attention mechanism. The visual encoding from the visual block is concatenated with the attention feature to emphasize the visual information. To make full use of the data and alleviate over-fitting, cross-validation is carried out on the combined training and validation sets. Concordance correlation coefficient (CCC) centering is used to merge the results from each fold. The achieved CCC on the validation set is 0.450 for valence and 0.651 for arousal, significantly outperforming the baseline method, whose corresponding CCCs are 0.310 and 0.170, respectively. The code is available at this https URL.
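The fusion step can be illustrated with a minimal PyTorch sketch. The module name, dimensions, and the choice to pool the audio and linguistic embeddings into a single attention context are assumptions for illustration only, not the authors' implementation (see the linked code for that); the CCC helper uses the common biased-variance convention.

```python
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    """Minimal multi-head co-attention fusion sketch (illustrative,
    not the paper's implementation)."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.head = nn.Linear(2 * dim, 2)  # per-frame valence and arousal

    def forward(self, visual, audio, linguistic):
        # Each input: (batch, time, dim) spatial-temporal embeddings
        # produced by the corresponding modality block.
        context = torch.cat([audio, linguistic], dim=1)
        attended, _ = self.attn(query=visual, key=context, value=context)
        # Concatenate the visual encoding with the attention feature
        # to emphasize the visual information, as described above.
        fused = torch.cat([visual, attended], dim=-1)
        return self.head(fused)

def ccc(pred, label):
    """Concordance correlation coefficient between 1-D tensors."""
    cov = ((pred - pred.mean()) * (label - label.mean())).mean()
    return 2 * cov / (pred.var(unbiased=False) + label.var(unbiased=False)
                      + (pred.mean() - label.mean()) ** 2)
```

A CCC of 1 indicates perfect agreement between predictions and labels in both correlation and scale, which is why it is the standard metric for continuous valence/arousal estimation.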
URL
https://arxiv.org/abs/2203.13031