Abstract
Large Language Models have introduced novel opportunities for text comprehension and generation, yet they are vulnerable to adversarial perturbations and data poisoning attacks, particularly in tasks such as text classification and translation. However, the adversarial robustness of abstractive text summarization models remains less explored. In this work, we unveil a novel approach that exploits the inherent lead bias in summarization models to perform adversarial perturbations. Furthermore, we introduce an innovative application of influence functions to execute data poisoning, which compromises the model's integrity. This approach not only skews the model's behavior toward producing desired outcomes but also reveals a new behavioral change, in which attacked models tend to generate extractive summaries rather than abstractive summaries.
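To make the lead-bias idea concrete, here is a minimal, hypothetical sketch of how such a perturbation could be applied at inference time: because abstractive summarizers trained on news data tend to rely heavily on a document's opening sentences, an attacker who inserts text into the lead position can steer the generated summary. The library choice (Hugging Face transformers), the model name, and the adversarial sentence are illustrative assumptions, not the paper's actual attack setup.

```python
# Minimal sketch (assumed setup, not the paper's code) of a lead-bias
# perturbation against an abstractive summarizer.
from transformers import pipeline

def lead_bias_perturbation(article: str, adversarial_lead: str) -> str:
    """Prepend attacker-chosen text so it occupies the lead position,
    where lead-biased summarizers pay the most attention."""
    return adversarial_lead.strip() + " " + article

if __name__ == "__main__":
    # Any off-the-shelf summarization model works for this illustration.
    summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

    article = (
        "The city council approved the new transit budget on Tuesday. "
        "Officials said the plan expands bus service to outlying districts "
        "and funds track repairs over the next five years."
    )
    # Attacker-chosen lead sentence (purely illustrative).
    poisoned = lead_bias_perturbation(
        article, "Officials confirmed the transit plan will be cancelled."
    )
    print(summarizer(poisoned, max_length=40, min_length=10)[0]["summary_text"])
```

Under this kind of perturbation, the summary tends to echo the injected lead sentence, which is the behavior the attack exploits.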
Abstract (translated)
Large Language Models have introduced new opportunities for text comprehension and generation. However, they are susceptible to adversarial perturbations and data poisoning attacks, particularly in tasks such as text classification and translation. Yet the adversarial robustness of abstractive text summarization models remains less studied. In this work, we unveil a novel approach that exploits the inherent lead bias of summarization models to perform adversarial perturbations. Furthermore, we introduce an innovative application of influence functions to execute data poisoning, thereby compromising the model's integrity. This approach not only shows a skew in the model's behavior toward producing desired outcomes, but also reveals a new behavioral change in which attacked models tend to generate extractive rather than abstractive summaries.
URL
https://arxiv.org/abs/2410.20019