Abstract
Recently, heatmap regression methods based on 1D landmark representations have shown prominent performance on locating facial landmarks. However, previous methods have not fully explored the potential of 1D landmark representations for sequential and structural modeling of multiple landmarks in facial landmark tracking. To address this limitation, we propose a Transformer architecture, namely 1DFormer, which learns informative 1D landmark representations for facial landmark tracking by capturing the dynamic and geometric patterns of landmarks via token communications in both the temporal and the spatial dimensions. For temporal modeling, we propose a recurrent token mixing mechanism, an axis-landmark-positional embedding mechanism, and a confidence-enhanced multi-head attention mechanism to adaptively and robustly embed long-term landmark dynamics into their 1D representations. For structure modeling, we design intra-group and inter-group structure modeling mechanisms that encode component-level and global-level facial structure patterns, refining the 1D landmark representations through token communications in the spatial dimension via 1D convolutional layers. Experimental results on the 300VW and TF databases show that 1DFormer successfully models long-range sequential patterns as well as inherent facial structures to learn informative 1D representations of landmark sequences, and achieves state-of-the-art performance on facial landmark tracking.
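To make the two modeling ideas above more concrete, below is a minimal PyTorch sketch (not the authors' released code): it pairs a confidence-modulated temporal self-attention over one landmark's token sequence with intra-group and inter-group 1D convolutions over the landmark dimension of a single frame. The tensor shapes, the confidence-weighting rule, and the landmark grouping are all assumptions made for illustration.

# Minimal illustrative sketch (not the authors' code). Assumptions: each landmark
# token is a D-dimensional vector built from its 1D (per-axis) representations;
# conf holds per-frame landmark confidences (e.g., heatmap peak values); the
# grouping of landmarks into facial components is hypothetical.
import torch
import torch.nn as nn

class ConfidenceEnhancedAttention(nn.Module):
    # Temporal self-attention over one landmark's token sequence; unreliable frames
    # are down-weighted by rescaling keys/values with their confidence (one simple
    # assumption for the confidence-enhanced attention named in the abstract).
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens, conf):
        # tokens: (B, T, D); conf: (B, T) with values in (0, 1].
        kv = tokens * conf.unsqueeze(-1)
        out, _ = self.attn(tokens, kv, kv)
        return out

class GroupStructureRefiner(nn.Module):
    # Intra-group / inter-group token communication along the landmark axis via
    # 1D convolutions; groups stand for facial components (contour, eyes, mouth, ...).
    def __init__(self, dim, groups):
        super().__init__()
        self.groups = groups
        self.intra = nn.Conv1d(dim, dim, kernel_size=3, padding=1)
        self.inter = nn.Conv1d(dim, dim, kernel_size=3, padding=1)

    def forward(self, x):
        # x: (B, N, D) tokens of N landmarks in one frame.
        x = x.transpose(1, 2)                    # (B, D, N) for Conv1d
        refined = x.clone()
        for idx in self.groups:                  # component-level refinement
            refined[:, :, idx] = self.intra(x[:, :, idx])
        refined = refined + self.inter(refined)  # global-level refinement
        return refined.transpose(1, 2)

if __name__ == "__main__":
    B, T, N, D = 2, 8, 68, 128
    temporal = ConfidenceEnhancedAttention(D)
    spatial = GroupStructureRefiner(D, groups=[list(range(0, 17)), list(range(17, 68))])
    seq, conf = torch.randn(B, T, D), torch.rand(B, T).clamp_(min=0.1)
    frame = torch.randn(B, N, D)
    print(temporal(seq, conf).shape, spatial(frame).shape)

The temporal and spatial modules are kept separate here purely for readability; the abstract describes them as jointly learned parts of the 1DFormer architecture.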
Abstract (translated)
In recent years, heatmap regression methods based on 1D landmark representations have shown excellent performance in locating facial landmarks. However, previous methods neglected to explore the potential advantages of 1D landmark representations for sequential and structural modeling of multiple landmarks. To overcome this limitation, we propose a Transformer architecture, namely 1DFormer, which learns useful 1D landmark representations by capturing the dynamic and geometric patterns of landmarks in the temporal and spatial dimensions. For temporal modeling, we propose a recurrent token mixing mechanism, an axis-landmark-positional embedding mechanism, and a confidence-enhanced multi-head attention mechanism to adaptively and robustly embed long-term landmark dynamics into their 1D representations; for structure modeling, we design intra-group and inter-group structure modeling mechanisms that encode component-level and global-level structural patterns as a refinement of the 1D landmark representations, through 1D convolutional layers in the spatial dimension. Experimental results on the 300VW and TF databases show that 1DFormer successfully models long-range temporal sequence patterns as well as inherent facial structures, and achieves outstanding performance on facial landmark tracking.
URL
https://arxiv.org/abs/2311.00241