Abstract
The diversity of knowledge encoded in large language models (LLMs) and their ability to apply this knowledge zero-shot in a range of settings make them promising candidates for use in decision-making. However, they are currently limited by their inability to reliably provide outputs which are explainable and contestable. In this paper, we attempt to reconcile these strengths and weaknesses by introducing a method for supplementing LLMs with argumentative reasoning. Concretely, we introduce argumentative LLMs, a method utilising LLMs to construct argumentation frameworks, which then serve as the basis for formal reasoning in decision-making. The interpretable nature of these argumentation frameworks and of the formal reasoning means that any decision made by the supplemented LLM may be naturally explained to, and contested by, humans. We demonstrate the effectiveness of argumentative LLMs experimentally in the decision-making task of claim verification. We obtain results that are competitive with, and in some cases surpass, comparable state-of-the-art techniques.
URL
https://arxiv.org/abs/2405.02079