Abstract
The proliferation of deepfake videos, synthetic media produced with advanced artificial-intelligence techniques, has raised significant concerns across sectors including politics, entertainment, and security. In response, this research introduces an innovative and streamlined model for classifying deepfake videos generated by five distinct encoders. Our approach achieves state-of-the-art performance while optimizing computational resources. At its core, the model employs part of a VGG19bn network as a backbone for efficient feature extraction, a strategy proven effective in image-related tasks. We integrate a Capsule Network with a spatial-temporal attention mechanism to strengthen the model's classification capabilities while conserving resources; this combination captures intricate hierarchies among features, enabling robust identification of deepfake attributes. We further adopt an existing video-level fusion technique that leverages temporal attention: it operates on concatenated frame-feature vectors and exploits the temporal dependencies inherent in deepfake videos. By aggregating information across frames, the model gains a holistic understanding of video content, yielding more precise predictions. Experimental results on DFDM, an extensive benchmark dataset of deepfake videos, demonstrate the efficacy of the proposed method: our approach improves deepfake-video categorization accuracy by up to 4% over baseline models while demanding fewer computational resources.
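The abstract does not specify the exact form of the video-level fusion, so the following is only a minimal, dependency-free sketch of one common variant of temporal-attention fusion: each frame's feature vector receives a scalar score, the scores are softmax-normalized into attention weights, and the fused video-level feature is the weighted sum of frame features. The function names and the learned scoring vector `score_weights` are hypothetical illustrations, not the paper's actual components.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def temporal_attention_fusion(frame_features, score_weights):
    """Fuse per-frame feature vectors into one video-level vector.

    frame_features: list of T feature vectors, each of length D
    score_weights:  length-D vector scoring each frame (stand-in for a
                    learned attention projection; hypothetical here)
    """
    # Scalar attention score per frame: dot(score_weights, frame_feature).
    scores = [sum(w * f for w, f in zip(score_weights, feat))
              for feat in frame_features]
    # Normalize scores into attention weights that sum to 1.
    alphas = softmax(scores)
    # Weighted sum of frame features along the temporal axis.
    dim = len(frame_features[0])
    fused = [sum(a * feat[d] for a, feat in zip(alphas, frame_features))
             for d in range(dim)]
    return fused, alphas

# Toy example: 3 frames with 2-dimensional features.
frames = [[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]]
fused, alphas = temporal_attention_fusion(frames, [1.0, -1.0])
```

In a real model the frame features would come from the VGG19bn/Capsule pipeline and the scoring step would be a learned layer, but the aggregation across frames follows this same pattern.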
URL
https://arxiv.org/abs/2311.03782