Abstract
Modern businesses are increasingly challenged by the time and expense required to generate and assess high-quality content. Human writers face time constraints, and extrinsic evaluations can be costly. While Large Language Models (LLMs) offer potential for content creation, concerns about the quality of AI-generated content persist. Traditional evaluation methods, such as human surveys, further increase operational costs, highlighting the need for efficient, automated solutions. This research introduces Generative Agents as a means of tackling these challenges. These agents can rapidly and cost-effectively evaluate AI-generated content, simulating human judgment by rating aspects such as coherence, interestingness, clarity, fairness, and relevance. By incorporating these agents, businesses can streamline content generation and ensure consistent, high-quality output while minimizing reliance on costly human evaluation. The study provides critical insights into enhancing LLMs to produce business-aligned, high-quality content, offering significant advances in automated content generation and evaluation.
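The evaluation scheme described above can be sketched as a rubric-driven rating loop. This is a hypothetical illustration, not the paper's implementation: the five dimensions come from the abstract, but the prompt wording, the 1-5 scale, and the `rate` function (mocked here with a fixed score; a real system would call an LLM API) are all assumptions.

```python
# Hypothetical sketch of a generative-agent evaluator that rates content
# on the rubric dimensions named in the abstract. The agent call itself
# is mocked; a real system would send the prompt to an LLM.

DIMENSIONS = ["coherence", "interestingness", "clarity", "fairness", "relevance"]

def build_prompt(text: str, dimension: str) -> str:
    """Compose a rating instruction for one rubric dimension (assumed format)."""
    return (f"Rate the following content for {dimension} on a 1-5 scale. "
            f"Respond with a single integer.\n\n{text}")

def rate(prompt: str) -> int:
    """Placeholder for the LLM call; returns a fixed mid-scale score."""
    return 3

def evaluate(text: str) -> dict:
    """Score `text` on every dimension and attach the mean as 'overall'."""
    scores = {dim: rate(build_prompt(text, dim)) for dim in DIMENSIONS}
    scores["overall"] = sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)
    return scores

result = evaluate("Example AI-generated marketing copy.")
print(result["overall"])
```

Averaging per-dimension scores is one simple aggregation choice; weighted schemes or per-dimension thresholds would fit the same structure.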
URL
https://arxiv.org/abs/2512.08273