Abstract
Deepfake technology, built on deep learning, can seamlessly insert individuals into digital media regardless of whether they ever participated. Its foundations lie in machine learning and Artificial Intelligence (AI). Initially, deepfakes served research, industry, and entertainment. While the concept has existed for decades, recent advances have made deepfakes nearly indistinguishable from reality, and accessibility has soared, enabling even novices to create convincing deepfakes. This accessibility, however, raises serious security concerns. The primary deepfake creation algorithm, the Generative Adversarial Network (GAN), uses machine learning to craft realistic images and videos. Our objective is to use a Convolutional Neural Network (CNN) and a CapsuleNet combined with an LSTM to distinguish deepfake-generated frames from originals. Furthermore, we aim to elucidate our model's decision-making process through Explainable AI, fostering transparent human-AI relationships and offering practical examples for real-life scenarios.
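To make the GAN mechanism mentioned above concrete, the sketch below computes the standard discriminator loss and the non-saturating generator loss for scalar discriminator outputs. This is an illustrative example of the general GAN objective, not code from the paper; the function name and inputs are assumptions for the demonstration.

```python
import math

def gan_losses(d_real, d_fake):
    """Standard GAN losses for scalar discriminator outputs in (0, 1).

    d_real: discriminator's probability that a real sample is real.
    d_fake: discriminator's probability that a generated sample is real.
    (Illustrative sketch; not the paper's implementation.)
    """
    # Discriminator wants d_real -> 1 and d_fake -> 0.
    d_loss = -(math.log(d_real) + math.log(1.0 - d_fake))
    # Non-saturating generator loss: generator wants d_fake -> 1.
    g_loss = -math.log(d_fake)
    return d_loss, g_loss

# A discriminator that currently spots fakes well (d_fake low) gives the
# generator a large loss, driving it to produce more realistic frames.
d_loss, g_loss = gan_losses(d_real=0.9, d_fake=0.1)
```

Training alternates between minimizing `d_loss` over the discriminator's parameters and `g_loss` over the generator's, which is the adversarial dynamic that makes GAN-generated frames progressively harder to distinguish from originals.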
URL
https://arxiv.org/abs/2404.12841