Abstract
Deepfake speech represents a real and growing threat to systems and society. Many detectors have been created to aid in defense against speech deepfakes. While these detectors implement myriad methodologies, many rely on low-level fragments of the speech-generation process. We hypothesize that breath, a higher-level component of natural speech, is generated improperly in deepfake speech and can therefore serve as a powerful discriminator. To evaluate this, we create a breath detector and apply it to a custom dataset of audio from online news articles to discriminate between real and deepfake speech. We also make this custom dataset publicly available to facilitate comparison in future work. Applying our simple breath detector as a deepfake-speech discriminator to in-the-wild samples yields accurate classification (a perfect 1.0 AUPRC and 0.0 EER on test data) across 33.6 hours of audio. We compare our model with the state-of-the-art SSL-wav2vec model and show that this complex deep-learning model completely fails to classify the same in-the-wild samples (0.72 AUPRC and 0.99 EER).
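The abstract evaluates detectors with AUPRC (area under the precision-recall curve, here via average precision) and EER (the operating point where false-accept and false-reject rates are equal). As an illustration only (not the paper's code), these two metrics can be computed from raw detection scores roughly as follows; the function names and the toy scores are assumptions for the sketch:

```python
import numpy as np

def average_precision(labels, scores):
    """AUPRC via average precision: mean precision at each true positive,
    with samples ranked by descending score."""
    order = np.argsort(-np.asarray(scores))
    labels = np.asarray(labels)[order]
    tp = np.cumsum(labels)
    precision = tp / np.arange(1, len(labels) + 1)
    return precision[labels == 1].mean()

def equal_error_rate(labels, scores):
    """EER: sweep thresholds, return the rate where the false-accept rate
    (negatives scored above threshold) meets the false-reject rate
    (positives scored below threshold)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    best_gap, eer = np.inf, 1.0
    for t in np.sort(np.unique(scores)):
        far = np.mean(scores[labels == 0] >= t)  # false accepts
        frr = np.mean(scores[labels == 1] < t)   # false rejects
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer
```

With perfectly separated scores, these return 1.0 AUPRC and 0.0 EER, matching the paper's reported best case; a detector near chance would show both metrics degrading toward the SSL-wav2vec figures quoted above.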
URL
https://arxiv.org/abs/2404.15143