Abstract
Large Language Models (LLMs) are already as persuasive as humans, yet we know very little about why. This paper investigates the persuasion strategies of LLMs, comparing them with human-generated arguments. Using data from an experiment with 1,251 participants, we analyze the persuasion strategies of LLM-generated and human-generated arguments using measures of cognitive effort (lexical and grammatical complexity) and moral-emotional language (sentiment and moral analysis). The study reveals that LLMs produce arguments that require greater cognitive effort, exhibiting more complex grammatical and lexical structures than their human counterparts. Additionally, LLMs show a significant propensity to engage more deeply with moral language, drawing on both positive and negative moral foundations more frequently than humans. In contrast with previous research, no significant difference was found in the emotional content produced by LLMs and humans. These findings contribute to the discourse on AI and persuasion, highlighting the dual potential of LLMs to both enhance and undermine informational integrity through communication strategies for digital persuasion.
URL
https://arxiv.org/abs/2404.09329