Abstract
Nowadays, forged faces raise pressing security concerns around fake news, fraud, impersonation, and more. Despite demonstrated success in intra-domain face forgery detection, existing detection methods lack generalization capability and tend to suffer dramatic performance drops when deployed to unforeseen domains. To mitigate this issue, this paper designs a more general fake-face detection model based on the Vision Transformer (ViT) architecture. In the training phase, the pretrained ViT weights are frozen, and only the Low-Rank Adaptation (LoRA) modules are updated. Additionally, the Single Center Loss (SCL) is applied to supervise the training process, further improving the generalization capability of the model. The proposed method achieves state-of-the-art detection performance in both cross-manipulation and cross-dataset evaluations.
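The two ingredients the abstract names can be sketched numerically: LoRA adds a trainable low-rank update B·A to a frozen weight matrix, and SCL pulls real-face features toward one shared center while pushing manipulated-face features beyond a margin. The function names, the NumPy formulation, and the exact loss form below (mean real distance plus a hinge with margin scaled by √D, following the common SCL formulation) are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=16, r=4):
    # Frozen base weight W plus trainable low-rank update B @ A,
    # scaled by alpha / r as in standard LoRA (assumed hyperparameters).
    return x @ (W + (alpha / r) * (B @ A)).T

def single_center_loss(feats, labels, center, margin=0.3):
    # Pull real-face features (label 0) toward a single center; push
    # fake-face features (label 1) at least margin * sqrt(D) farther away.
    d = np.linalg.norm(feats - center, axis=1)
    m_real = d[labels == 0].mean()
    m_fake = d[labels == 1].mean()
    hinge = max(m_real - m_fake + margin * np.sqrt(feats.shape[1]), 0.0)
    return m_real + hinge
```

With `B` initialized to zeros (the usual LoRA initialization), `lora_forward` reproduces the frozen model exactly at the start of training, so fine-tuning begins from the pretrained behavior.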
Abstract (translated)
Nowadays, forged faces raise urgent security concerns over real-world problems such as fake news, fraud, and impersonation. Although success has been achieved in intra-domain face forgery detection, existing detection methods lack generalization ability and tend to perform poorly in unseen domains. To alleviate this problem, this paper designs a more general forged-face detection model based on the Vision Transformer (ViT) architecture. In the training phase, the pretrained ViT weights are frozen and only the LoRA modules are updated. In addition, the Single Center Loss (SCL) is used to supervise the training process, further improving the model's generalization ability. The method achieves state-of-the-art detection performance in both cross-manipulation and cross-dataset evaluations.
URL
https://arxiv.org/abs/2303.00917