Abstract
NLP models play a pivotal role in real-world applications such as machine translation, sentiment analysis, and question answering, facilitating efficient communication and decision-making in domains ranging from healthcare to finance. However, text adversarial attacks pose a significant challenge to the robustness of these models. Such attacks deliberately manipulate input text to mislead a model's predictions while remaining interpretable to humans. Despite the remarkable performance of state-of-the-art models such as BERT on many natural language processing tasks, they remain vulnerable to adversarial perturbations of the input text. To examine this vulnerability of text classifiers, this paper explores three distinct attack mechanisms against a BERT victim model: the BERT-on-BERT attack, the PWWS attack, and the Fraud Bargain's Attack (FBA). Using the IMDB, AG News, and SST2 datasets, we conduct a thorough comparative analysis of the effectiveness of these attacks on the BERT classifier. The experiments reveal that PWWS is the most potent adversary, consistently outperforming the other methods across multiple evaluation scenarios: it achieves lower runtime, higher accuracy, and favorable semantic similarity scores, underscoring its efficacy in generating adversarial examples for text classification. The key contribution of this paper is the assessment of the relative performance of three prevalent state-of-the-art attack mechanisms.
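To make the word-substitution idea concrete, the following is a minimal toy sketch of a PWWS-style (Probability Weighted Word Saliency) attack. Everything here is a stand-in assumption for illustration: `victim_prob` is a toy keyword-counting "classifier" (not BERT), `SYNONYMS` is a tiny hand-written lexicon (not WordNet), and substitutions are ranked by saliency times probability drop, a simplification of the paper's actual scoring.

```python
# Toy PWWS-style word-saliency attack sketch.
# victim_prob and SYNONYMS are illustrative stand-ins, not the real victim model.

def victim_prob(words):
    # Toy sentiment "classifier": positive-class probability grows with
    # the number of positive keywords. Stand-in for a BERT classifier.
    positive = {"great", "good", "excellent"}
    score = sum(1 for w in words if w in positive)
    return min(0.4 + 0.3 * score, 0.99)

# Tiny hand-written synonym lexicon (a real attack would use WordNet).
SYNONYMS = {"great": ["fine"], "good": ["fine"], "excellent": ["fine"]}

def saliency(words, i):
    # Saliency of word i: drop in the predicted probability when it is masked.
    masked = words[:i] + ["<unk>"] + words[i + 1:]
    return victim_prob(words) - victim_prob(masked)

def pwws_attack(words, threshold=0.5):
    # Score each candidate substitution by saliency * probability drop,
    # then apply substitutions greedily until the prediction flips.
    candidates = []
    for i, w in enumerate(words):
        for syn in SYNONYMS.get(w, []):
            swapped = words[:i] + [syn] + words[i + 1:]
            delta = victim_prob(words) - victim_prob(swapped)
            candidates.append((saliency(words, i) * delta, i, syn))
    adv = list(words)
    for _, i, syn in sorted(candidates, reverse=True):
        adv[i] = syn
        if victim_prob(adv) < threshold:
            break  # prediction flipped to the negative class
    return adv
```

For example, `pwws_attack(["a", "great", "movie"])` swaps the high-saliency word `great` for the neutral `fine`, pushing the toy positive-class probability below 0.5 while changing only one word, which mirrors how word-level attacks preserve readability.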
URL
https://arxiv.org/abs/2404.05159