Abstract
Detecting face forgery videos remains a formidable challenge in digital forensics, especially generalizing to unseen datasets and staying robust to common perturbations. In this paper, we tackle this issue by exploiting the synergy between audio and visual speech, proposing a novel approach based on audio-visual speech representation learning. Our work is motivated by the finding that audio signals, enriched with speech content, provide precise cues that closely reflect facial movements. To this end, we first learn precise audio-visual speech representations on real videos via a self-supervised masked prediction task that encodes local and global semantic information simultaneously. The derived model is then transferred directly to the forgery detection task. Extensive experiments demonstrate that our method outperforms state-of-the-art methods in cross-dataset generalization and robustness, without any fake video participating in model training. Code is available at this https URL.
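To make the two-stage recipe concrete, here is a minimal PyTorch sketch of what such a pipeline could look like. Everything in it, the `AVSpeechEncoder` fusion module, the feature dimensions, the stand-in regression targets, and the audio-visual mismatch score, is an illustrative assumption based only on the abstract, not the authors' released implementation (see the URL below for that).

```python
# Minimal sketch (NOT the authors' code) of the two-stage pipeline the
# abstract describes: (1) self-supervised masked prediction on real videos
# to learn audio-visual speech representations, (2) direct transfer to
# forgery detection via audio-visual consistency. Module names, feature
# dimensions, and the scoring rule are all illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AVSpeechEncoder(nn.Module):
    """Toy fusion encoder over per-frame lip features and audio features."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.visual_proj = nn.Linear(512, dim)  # e.g. lip-region CNN features
        self.audio_proj = nn.Linear(80, dim)    # e.g. log-mel filterbanks
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.fusion = nn.TransformerEncoder(layer, num_layers=4)
        self.mask_token = nn.Parameter(torch.zeros(dim))

    def forward(self, visual, audio, mask=None):
        x = self.visual_proj(visual) + self.audio_proj(audio)
        if mask is not None:  # replace masked frames with a learned token
            x = torch.where(mask.unsqueeze(-1), self.mask_token.expand_as(x), x)
        return self.fusion(x)  # (B, T, dim)


def masked_prediction_loss(encoder, predictor, visual, audio, mask_ratio=0.3):
    """Stage 1: predict representations of masked frames from context.

    Real masked-prediction objectives typically regress discrete cluster
    targets; frozen unmasked features serve here as a simple stand-in.
    """
    B, T, _ = visual.shape
    mask = torch.rand(B, T, device=visual.device) < mask_ratio
    states = encoder(visual, audio, mask=mask)
    with torch.no_grad():
        targets = encoder(visual, audio)  # full audio-visual context
    return F.mse_loss(predictor(states[mask]), targets[mask])


def forgery_score(encoder, visual, audio):
    """Stage 2 (illustrative): score a clip by the mismatch between the
    audio-visual encoding and an audio-only encoding; a large mismatch
    suggests the mouth movements do not match the speech."""
    with torch.no_grad():
        av = encoder(visual, audio)
        audio_only = encoder(torch.zeros_like(visual), audio)
    frame_sim = F.cosine_similarity(av, audio_only, dim=-1)  # (B, T)
    return (1.0 - frame_sim).mean(dim=1)  # (B,) higher = more suspicious


if __name__ == "__main__":
    enc = AVSpeechEncoder()
    pred_head = nn.Linear(256, 256)
    vis, aud = torch.randn(2, 75, 512), torch.randn(2, 75, 80)
    print(masked_prediction_loss(enc, pred_head, vis, aud).item())
    print(forgery_score(enc, vis, aud))
```

Note how this framing matches the abstract's zero-fake-video claim: pretraining uses only real clips, and detection is an anomaly score under the learned representation rather than a classifier fit on forgeries.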
URL
https://arxiv.org/abs/2508.09913