Abstract
Recent works in relation extraction (RE) have achieved promising benchmark accuracy; however, our adversarial attack experiments show that these models rely excessively on entities, making their generalization capability questionable. To address this issue, we propose an adversarial training method specifically designed for RE. Our approach introduces both sequence- and token-level perturbations to each sample and uses a separate perturbation vocabulary to improve the search for entity and context perturbations. Furthermore, we introduce a probabilistic strategy that leaves some context tokens clean during adversarial training. This strategy grants a larger attack budget to entities and coaxes the model to leverage relational patterns embedded in the context. Extensive experiments show that, compared to various adversarial training methods, ours significantly improves both the accuracy and robustness of the model. Additionally, experiments under different data-availability settings highlight the effectiveness of our method in low-resource scenarios. We also perform in-depth analyses of the proposed method and offer further insights. We will release our code at this https URL.
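The clean-token strategy described above can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's implementation: the function name, the `keep_clean_prob` parameter, and the entity-marker tokenization are all illustrative.

```python
import random

def select_perturbation_positions(tokens, entity_mask, keep_clean_prob=0.7, seed=0):
    """Choose which token positions to perturb during adversarial training.

    Entity tokens (entity_mask[i] == True) are always eligible, giving them a
    larger effective attack budget; each context token is left clean with
    probability `keep_clean_prob`, so some clean relational patterns survive.
    Hypothetical helper for illustration only.
    """
    rng = random.Random(seed)
    positions = []
    for i, _ in enumerate(tokens):
        if entity_mask[i]:
            positions.append(i)   # entities: always inside the attack budget
        elif rng.random() >= keep_clean_prob:
            positions.append(i)   # context: perturbed only occasionally
    return positions

tokens = ["[E1]", "Bill", "Gates", "[/E1]", "founded", "[E2]", "Microsoft", "[/E2]"]
entity_mask = [False, True, True, False, False, False, True, False]
print(select_perturbation_positions(tokens, entity_mask))  # → [0, 1, 2, 3, 6]
```

In this toy run, every entity token is selected for perturbation, while most context tokens (including the relation-bearing word "founded") are left clean, which is the behavior the abstract attributes to the probabilistic strategy.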
URL
https://arxiv.org/abs/2404.02931