Abstract
This paper presents our Facial Action Units (AUs) recognition submission to the fifth Affective Behavior Analysis in-the-wild (ABAW) Competition. Our approach consists of three main modules: (i) a pre-trained facial representation encoder, which produces a strong facial representation for each face image in the input sequence; (ii) an AU-specific feature generator, which learns a set of AU-specific features from each facial representation; and (iii) a spatio-temporal graph learning module, which constructs a spatio-temporal graph representation describing the AUs in all frames and predicts the occurrence of each AU from both the spatial information modeled within the corresponding face and the temporal dynamics learned across frames. Experimental results show that our approach outperforms the baseline and that the spatio-temporal graph representation learning yields the best results among all ablation systems.
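The three-module pipeline described above can be sketched roughly as follows. This is a minimal illustrative sketch only: the tensor shapes, the per-AU linear projections, the mean-pooling message passing over spatial (same-frame) and temporal (adjacent-frame) neighbours, and the shared sigmoid head are all assumptions for clarity, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D, K, F = 8, 512, 12, 64  # frames, encoder dim, number of AUs, AU-feature dim

# (i) Stand-in for the pre-trained facial representation encoder:
# one D-dim representation per frame (random placeholder, assumed shape).
frame_reprs = rng.standard_normal((T, D))

# (ii) AU-specific feature generator: one projection per AU, mapping each
# frame representation to K AU-specific feature vectors (hypothetical weights).
W_au = rng.standard_normal((K, D, F)) * 0.01
au_feats = np.einsum('td,kdf->tkf', frame_reprs, W_au)  # (T, K, F)

# (iii) Spatio-temporal graph: nodes are (frame, AU) pairs. One round of
# message passing averages each node with its spatial neighbours (all AUs in
# the same frame) and its temporal neighbours (same AU in adjacent frames).
spatial = au_feats.mean(axis=1, keepdims=True)           # (T, 1, F)
temporal = np.zeros_like(au_feats)
temporal[1:] += au_feats[:-1]                            # previous frame
temporal[:-1] += au_feats[1:]                            # next frame
counts = np.full((T, 1, 1), 2.0)
counts[0] = counts[-1] = 1.0                             # boundary frames have one neighbour
temporal /= counts
node_feats = (au_feats + spatial + temporal) / 3.0       # (T, K, F)

# Shared linear head + sigmoid: one occurrence probability per AU per frame.
w_out = rng.standard_normal(F) * 0.01
probs = 1.0 / (1.0 + np.exp(-(node_feats @ w_out)))      # (T, K)
print(probs.shape)
```

In this sketch the graph structure is implicit in the two pooling steps; a real implementation would instead learn edge weights and stack several message-passing layers.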
URL
https://arxiv.org/abs/2303.10644