Abstract
Facial action unit (AU) detection and face alignment are two highly correlated tasks, since facial landmarks provide precise AU locations that facilitate the extraction of meaningful local features for AU detection. Most existing AU detection works treat face alignment as a preprocessing step and handle the two tasks independently. In this paper, we propose a novel end-to-end deep learning framework for joint AU detection and face alignment, which has not been explored before. In particular, multi-scale shared features are learned first, and high-level face alignment features are fed into AU detection. Moreover, to extract precise local features, we propose an adaptive attention learning module that refines the attention map of each AU adaptively. Finally, the assembled local features are integrated with the face alignment features and global features for AU detection. Experiments on the BP4D and DISFA benchmarks demonstrate that our framework significantly outperforms state-of-the-art AU detection methods.
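The core idea of the adaptive attention learning module can be illustrated with a minimal sketch: each AU's attention map is initialized from a landmark-defined location and then refined by a learned adjustment before local features are pooled. The Gaussian initialization, the additive per-pixel offset, and all function names below are illustrative assumptions, not the paper's actual implementation (which uses trained refinement branches inside the network).

```python
import math
import random

def landmark_attention(h, w, center, sigma=3.0):
    # Initial AU attention map: a Gaussian bump centered at the AU's
    # landmark-predicted location (an assumed stand-in for the paper's
    # predefined attention initialization).
    cy, cx = center
    return [[math.exp(-((y - cy) ** 2 + (x - cx) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def refine_attention(att, offset):
    # Adaptive refinement step: add a per-pixel offset (here random noise
    # standing in for the output of a learned refinement branch), clip to
    # non-negative values, and renormalize so the peak is 1.
    refined = [[max(a + o, 0.0) for a, o in zip(row_a, row_o)]
               for row_a, row_o in zip(att, offset)]
    peak = max(max(row) for row in refined)
    return [[v / (peak + 1e-8) for v in row] for row in refined]

h = w = 12
att = landmark_attention(h, w, center=(4, 7))
rng = random.Random(0)
offset = [[0.1 * rng.gauss(0.0, 1.0) for _ in range(w)] for _ in range(h)]
refined = refine_attention(att, offset)
```

In the actual framework the offset would be produced by convolutional layers trained end-to-end with the AU detection loss, so the attention adapts per image rather than staying fixed at the landmark prior.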
URL
https://arxiv.org/abs/1803.05588