Abstract
Social media platforms such as Twitter, Reddit, and Sina Weibo play a crucial role in global communication but often encounter strict regulations in geopolitically sensitive regions. This situation has prompted users to ingeniously modify their way of communicating, frequently resorting to coded language in these regulated social media environments. This shift in communication is not merely a strategy to counteract regulation, but a vivid manifestation of language evolution, demonstrating how language naturally evolves under societal and technological pressures. Studying the evolution of language in regulated social media contexts is of significant importance for ensuring freedom of speech, optimizing content moderation, and advancing linguistic research. This paper proposes a multi-agent simulation framework using Large Language Models (LLMs) to explore the evolution of user language in regulated social media environments. The framework employs LLM-driven agents: supervisory agent who enforce dialogue supervision and participant agents who evolve their language strategies while engaging in conversation, simulating the evolution of communication styles under strict regulations aimed at evading social media regulation. The study evaluates the framework's effectiveness through a range of scenarios from abstract scenarios to real-world situations. Key findings indicate that LLMs are capable of simulating nuanced language dynamics and interactions in constrained settings, showing improvement in both evading supervision and information accuracy as evolution progresses. Furthermore, it was found that LLM agents adopt different strategies for different scenarios.
Abstract (translated)
社交媒体平台如Twitter、Reddit和新浪微博在全球交流中扮演着关键角色,但通常会在敏感的地缘政治地区遇到严格的监管。这种情况促使用户巧妙地修改他们的交流方式,经常在受监管的社交媒体环境中使用暗语。这种交流方式的转变不仅是对抗监管的一种策略,更是语言进化的生动表现,展示了在社会和技术压力下语言的自然演变。研究在受监管的社交媒体环境中语言演变的重要性对于确保言论自由、优化内容监管和推动语言研究具有重大意义。本文提出了一种使用大型语言模型(LLMs)的多代理仿真框架,探讨受监管社交媒体环境中用户语言的演变。框架采用LLM驱动的代理:监督代理负责对话监督,参与者代理在参与对话的过程中发展他们的语言策略,模拟在严格监管下避免社交媒体管理策略的演变。研究通过从抽象场景到现实世界情况的各种场景对框架的有效性进行评估。关键发现表明,LLMs能够模拟受约束环境中的细微语言动态和互动,随着进度的提高,逃避监督和信息准确性的能力都有所改善。此外,发现LLM代理采用不同的策略来应对不同的场景。
URL
https://arxiv.org/abs/2405.02858