Paper Reading AI Learner

Large Language Models are as persuasive as humans, but why? About the cognitive effort and moral-emotional language of LLM arguments

2024-04-14 19:01:20
Carlos Carrasco-Farre

Abstract

Large Language Models (LLMs) are already as persuasive as humans. However, we know very little about why. This paper investigates the persuasion strategies of LLMs, comparing them with human-generated arguments. Using a dataset from an experiment with 1,251 participants, we analyze the persuasion strategies of LLM-generated and human-generated arguments using measures of cognitive effort (lexical and grammatical complexity) and moral-emotional language (sentiment and moral analysis). The study reveals that LLMs produce arguments that require higher cognitive effort, exhibiting more complex grammatical and lexical structures than their human counterparts. Additionally, LLMs show a significant propensity to engage more deeply with moral language, drawing on both positive and negative moral foundations more frequently than humans. In contrast to previous research, no significant difference was found in the emotional content produced by LLMs and humans. These findings contribute to the discourse on AI and persuasion, highlighting the dual potential of LLMs to both enhance and undermine informational integrity through their strategies for digital persuasion.
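The abstract mentions measuring cognitive effort through lexical and grammatical complexity. As a rough illustration of that kind of analysis, here is a minimal sketch of two common text-complexity proxies, type-token ratio and mean sentence length. These are hypothetical stand-ins: the abstract does not specify which complexity metrics the paper actually uses, so this is only an assumed example of the general approach.

```python
# Illustrative text-complexity proxies. NOTE: these are assumed examples,
# not the paper's actual measures (the abstract does not name them).
import re


def type_token_ratio(text: str) -> float:
    """Lexical diversity: unique word forms divided by total word forms."""
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0


def mean_sentence_length(text: str) -> float:
    """Crude grammatical-complexity proxy: average words per sentence."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    if not sentences:
        return 0.0
    counts = [len(re.findall(r"[a-zA-Z']+", s)) for s in sentences]
    return sum(counts) / len(counts)


# Hypothetical argument text, for demonstration only.
argument = "Vaccination protects communities. It reduces transmission substantially."
print(type_token_ratio(argument))     # higher values = more varied vocabulary
print(mean_sentence_length(argument)) # longer sentences = more parsing effort
```

On this view, an LLM argument scoring higher on both proxies would demand more cognitive effort from readers, which is the direction of the paper's reported finding.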

URL

https://arxiv.org/abs/2404.09329

PDF

https://arxiv.org/pdf/2404.09329.pdf

