Abstract
Recurrent networks have been successful at modeling temporal data and are widely used for video analysis. However, in video face recognition, where base CNNs trained on large-scale data already provide discriminative features, using a Long Short-Term Memory (LSTM) network, a popular recurrent architecture, for feature learning can lead to overfitting and instead degrade performance. We propose a Recurrent Embedding Aggregation Network (REAN) for set-to-set face recognition. Compared with an LSTM, REAN is robust against overfitting because it learns only how to aggregate the pre-trained embeddings rather than learning representations from scratch. Compared with quality-aware aggregation methods, REAN can exploit context information to circumvent the noise introduced by redundant video frames. Empirical results on three public-domain video face recognition datasets, IJB-S, YTF, and PaSC, show that REAN significantly outperforms a naive CNN-LSTM structure as well as quality-aware aggregation methods.
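The abstract does not specify REAN's internals, but the core idea it describes (learning to aggregate frozen, pre-trained frame embeddings with a recurrent module, rather than learning features from scratch) can be illustrated with a minimal PyTorch-style sketch. All module names, dimensions, and the weighted-sum readout below are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class RecurrentEmbeddingAggregator(nn.Module):
    """Hypothetical sketch: aggregate a set of pre-trained, frozen face
    embeddings into one set-level embedding using a recurrent module.
    Not the exact REAN architecture, which the abstract does not detail."""

    def __init__(self, embed_dim=512, hidden_dim=256):
        super().__init__()
        # The recurrent module reads the sequence of frame embeddings, so each
        # frame's weight can depend on its context (the other frames in the set).
        self.rnn = nn.GRU(embed_dim, hidden_dim,
                          batch_first=True, bidirectional=True)
        # Scalar score per frame, normalized into aggregation weights.
        self.score = nn.Linear(2 * hidden_dim, 1)

    def forward(self, frame_embeddings):
        # frame_embeddings: (batch, num_frames, embed_dim), produced by a
        # frozen base CNN; only the aggregation weights are learned here.
        context, _ = self.rnn(frame_embeddings)
        weights = torch.softmax(self.score(context), dim=1)  # (batch, T, 1)
        # Weighted average of the *original* embeddings, so the output stays
        # in the pre-trained embedding space, which limits overfitting.
        return (weights * frame_embeddings).sum(dim=1)

# Usage: aggregate 20 frame embeddings of dimension 512 per video template.
agg = RecurrentEmbeddingAggregator()
frames = torch.randn(4, 20, 512)   # stand-in for CNN features of 4 videos
template = agg(frames)             # (4, 512) set-level embeddings
```

In this sketch, context-dependent weights are what distinguish a recurrent aggregator from per-frame quality-aware aggregation: a redundant or noisy frame can be down-weighted based on the other frames in the set, not just its own appearance.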
URL
https://arxiv.org/abs/1904.12019