Abstract
Media outlets are increasingly partisan and polarized. Most prior work has focused on detecting media bias. In this paper, we instead aim to mitigate media bias by generating a neutralized summary from multiple articles that present different ideological views. Motivated by the critical role of events and event relations in media bias detection, we propose to raise LLMs' awareness of bias through multi-document event reasoning, using a multi-document event relation graph to guide the summarization process. The graph encodes rich event information that helps reveal bias: four common types of intra-document event relations that reflect content framing bias, cross-document event coreference relations that reveal content selection bias, and event-level moral opinions that highlight opinionated framing bias. We further develop two strategies for incorporating the multi-document event relation graph into neutralized summarization. First, we convert the graph into natural language descriptions and feed this textualized graph to the LLM as part of a hard text prompt. Second, we encode the graph with a graph attention network and insert the resulting graph embedding into the LLM as a soft prompt. Both automatic and human evaluation confirm that our approach effectively mitigates both lexical and informational media bias while improving content preservation.
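As a rough illustration of the first strategy (graph textualization), the sketch below renders event-relation edges as template sentences that could be fed to an LLM as part of a hard text prompt. The relation inventory, templates, and data layout are our own assumptions for illustration, not the paper's actual schema.

```python
from dataclasses import dataclass

@dataclass
class EventRelation:
    head: str      # trigger phrase of the first event
    relation: str  # relation type, e.g. "temporal" or "causal"
    tail: str      # trigger phrase of the second event

# Illustrative templates mapping relation types to sentence frames;
# the paper's actual relation types and wording may differ.
TEMPLATES = {
    "temporal": '"{head}" happens before "{tail}".',
    "causal": '"{head}" leads to "{tail}".',
    "coreference": '"{head}" and "{tail}" describe the same event.',
}

def textualize_graph(relations):
    """Render graph edges as newline-joined natural-language sentences."""
    return "\n".join(
        TEMPLATES[r.relation].format(head=r.head, tail=r.tail)
        for r in relations
    )

# Hypothetical edges extracted from articles covering the same story.
graph = [
    EventRelation("the court ruling", "causal", "the appeal"),
    EventRelation("the vote", "temporal", "the protest"),
]
prompt_section = textualize_graph(graph)
```

The resulting `prompt_section` string would then be concatenated with the source articles and an instruction to produce a neutralized summary.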
URL
https://arxiv.org/abs/2506.12978