Abstract
The prevalence of unwarranted beliefs, spanning pseudoscience, logical fallacies, and conspiracy theories, presents substantial societal hurdles and risks spreading misinformation. Utilizing established psychometric assessments, this study compares the capabilities of large language models (LLMs) with those of the average human in detecting common logical pitfalls. We undertake a philosophical inquiry, juxtaposing human rationality against that of LLMs. Furthermore, we propose methodologies for harnessing LLMs to counter misconceptions, drawing upon psychological models of persuasion such as cognitive dissonance theory and the elaboration likelihood model. Through this endeavor, we highlight the potential of LLMs as personalized misinformation-debunking agents.
URL
https://arxiv.org/abs/2405.00843