Abstract
Prompt-based methods have attracted increasing attention in NLP and have proven effective on many downstream tasks. Many works have explored these methods' potential for knowledge extraction, but few examine their ability to perform logical reasoning. In this work, we study the effectiveness of prompt-based methods on first-order logical reasoning and find that the bottleneck lies in logical negation. Our analysis shows that logical negation tends to induce spurious correlations with negative answers, while propositions without logical negation correlate with positive answers. To address this problem, we propose a simple but effective method, Negation Augmenting and Negation Debiasing (NAND), which introduces negative propositions into prompt-based methods without updating parameters. Specifically, these negative propositions counteract the spurious correlations by supplying a "not" for every instance, so that models cannot make decisions merely by whether an expression contains a logical negation. Experiments on three datasets show that NAND not only calibrates logical negation but also significantly enhances the logical reasoning of prompt-based methods without model retraining.
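The core augmentation idea, supplying a "not" for every instance so the mere presence of negation stops being a usable decision cue, can be sketched as follows. This is a minimal illustration under our own assumptions: the filler sentence and the toy cue-based "model" are hypothetical stand-ins, not the paper's actual prompts or models.

```python
# A trivially true negative proposition prepended to every prompt,
# so that all instances contain a logical negation (assumed filler).
FILLER = "It is not the case that one equals two."

def augment(prompt: str) -> str:
    """Prepend a true negative proposition so every prompt contains 'not'."""
    return f"{FILLER} {prompt}"

def cue_only_answer(prompt: str) -> str:
    """Toy stand-in for a biased model that answers 'no' whenever the
    prompt contains a logical negation, and 'yes' otherwise."""
    return "no" if " not " in f" {prompt} " else "yes"

plain = "Socrates is mortal."
negated = "Socrates is not mortal."

# Before augmentation, the surface cue alone separates the two prompts.
print(cue_only_answer(plain), cue_only_answer(negated))
# After augmentation, both prompts contain 'not', so the cue is
# uninformative and can no longer drive the decision by itself.
print(cue_only_answer(augment(plain)), cue_only_answer(augment(negated)))
```

Because every augmented instance now carries a negation, a model exploiting the "contains not → answer no" shortcut produces the same output for both prompts, forcing the decision to rest on the proposition's actual content instead.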
URL
https://arxiv.org/abs/2405.04872