Abstract
With the advent of deep learning methods, Neural Machine Translation (NMT) systems have become increasingly powerful. However, deep learning based systems are susceptible to adversarial attacks, where imperceptible changes to the input can cause undesirable changes in the output of the system. To date, there has been little work investigating adversarial attacks on sequence-to-sequence systems such as NMT models. Previous work in NMT has examined attacks aimed at introducing target phrases into the output sequence. In this work, adversarial attacks on NMT systems are explored from an output perception perspective. Thus, the aim of an attack is to change the perception of the output sequence without altering the perception of the input sequence. For example, an adversary may distort the sentiment of translated reviews to have an exaggerated positive sentiment. In practice, it is challenging to run extensive human perception experiments, so a proxy deep-learning classifier applied to the NMT output is used to measure perception changes. Experiments demonstrate that the sentiment perception of NMT systems' output sequences can be changed significantly.
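The proxy-classifier evaluation described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual setup: the lexicon-based `proxy_sentiment` is a toy stand-in for the deep sentiment classifier, and the example sentences stand in for translations of a clean and an adversarially perturbed source.

```python
# Sketch of perception-based evaluation: an attack succeeds if the proxy
# classifier's sentiment score for the translated output shifts, while the
# source-side edit remains imperceptible. All components here are toy
# stand-ins for the real NMT model and deep sentiment classifier.

POSITIVE = {"great", "excellent", "wonderful", "good"}
NEGATIVE = {"bad", "poor", "terrible", "awful"}

def proxy_sentiment(text: str) -> float:
    """Toy lexicon classifier standing in for a deep sentiment model.

    Returns a score in [-1, 1]; larger means more positive sentiment.
    """
    words = text.lower().split()
    if not words:
        return 0.0
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return score / len(words)

def perception_shift(clean_translation: str, adversarial_translation: str) -> float:
    """Change in perceived sentiment of the NMT output under attack."""
    return proxy_sentiment(adversarial_translation) - proxy_sentiment(clean_translation)

# Hypothetical example: a small source-side perturbation exaggerates the
# positive sentiment of the translated review.
clean = "the food was good but the service was poor"
attacked = "the food was excellent and the service was wonderful"
print(perception_shift(clean, attacked))  # positive shift => attack succeeded
```

A real evaluation would replace `proxy_sentiment` with a trained sentiment classifier and feed it the outputs of the NMT system on clean and perturbed inputs; the measured shift then serves as the attack objective.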
URL
https://arxiv.org/abs/2305.01437