Abstract
With news and information as easy to access as they currently are, it is more important than ever to ensure that people are not misled by what they read. Recently, the rise of neural fake news (AI-generated fake news) and its demonstrated effectiveness at fooling humans have prompted the development of models to detect it. One such model is Grover, which can both detect neural fake news in order to prevent its spread, and generate it to demonstrate how such a model could be misused to fool human readers. In this work we explore Grover's fake news detection capabilities by performing targeted attacks through perturbations of input news articles. We thereby test Grover's resilience to adversarial attacks and expose potential vulnerabilities that should be addressed in future iterations to ensure it can accurately detect all types of fake news.
URL
https://arxiv.org/abs/2302.00509