
Multi-document Summarization through Multi-document Event Relation Graph Reasoning in LLMs: a case study in Framing Bias Mitigation

2025-06-15 22:14:59
Yuanyuan Lei, Ruihong Huang

Abstract

Media outlets are becoming increasingly partisan and polarized. Most previous work has focused on detecting media bias. In this paper, we aim to mitigate media bias by generating a neutralized summary from multiple articles that present different ideological views. Motivated by the critical role of events and event relations in media bias detection, we propose to increase LLMs' awareness of bias through multi-document event reasoning and use a multi-document event relation graph to guide the summarization process. This graph contains rich event information useful for revealing bias: four common types of in-document event relations that reflect content framing bias, cross-document event coreference relations that reveal content selection bias, and event-level moral opinions that highlight opinionated framing bias. We further develop two strategies to incorporate the multi-document event relation graph into neutralized summarization. First, we convert the graph into natural language descriptions and feed the textualized graph into LLMs as part of a hard text prompt. Second, we encode the graph with a graph attention network and insert the graph embedding into LLMs as a soft prompt. Both automatic and human evaluation confirm that our approach effectively mitigates both lexical and informational media bias while also improving content preservation.
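As a rough illustration of the two strategies described above, the Python sketch below shows how a textualized event relation graph could be prepended as a hard text prompt, and how a graph-attention encoding of the same graph could be injected as a soft prompt via input embeddings. This is a minimal sketch under stated assumptions, not the authors' implementation: the names (textualize_graph, GraphSoftPrompt), the GPT-2 backbone, the random placeholder node features, and the PyTorch Geometric GATConv encoder are all illustrative choices.

# Hypothetical sketch of the two graph-prompting strategies; names, model choice,
# and node features are assumptions for illustration only.
import torch
import torch.nn as nn
from torch_geometric.nn import GATConv                      # graph attention network layer
from transformers import AutoModelForCausalLM, AutoTokenizer

# --- Strategy 1: textualize the event relation graph as a hard text prompt ---
def textualize_graph(events, relations):
    """Render event nodes and typed relations as plain-language sentences."""
    lines = [f"Event {i}: {e}" for i, e in enumerate(events)]
    lines += [f"Event {h} has a {rel} relation with Event {t}" for h, rel, t in relations]
    return "\n".join(lines)

events = ["protesters gathered downtown", "police dispersed the crowd"]   # toy example
relations = [(0, "temporal (before)", 1)]
hard_prompt = ("Event relation graph:\n" + textualize_graph(events, relations)
               + "\n\nWrite a neutral summary of the articles below:\n")

# --- Strategy 2: encode the graph with a GAT and prepend it as a soft prompt ---
class GraphSoftPrompt(nn.Module):
    def __init__(self, node_dim, lm_dim, heads=4):
        super().__init__()
        self.gat1 = GATConv(node_dim, lm_dim // heads, heads=heads)
        self.gat2 = GATConv(lm_dim, lm_dim, heads=1)

    def forward(self, node_feats, edge_index):
        h = torch.relu(self.gat1(node_feats, edge_index))
        return self.gat2(h, edge_index)                      # one soft-prompt vector per event node

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm_dim = lm.get_input_embeddings().embedding_dim

node_feats = torch.randn(len(events), 128)                   # placeholder event-node features
edge_index = torch.tensor([[0], [1]])                        # single 0 -> 1 edge from the relation above
soft_prompt = GraphSoftPrompt(128, lm_dim)(node_feats, edge_index)       # shape (2, lm_dim)

ids = tok(hard_prompt + "ARTICLE TEXT ...", return_tensors="pt").input_ids
text_embeds = lm.get_input_embeddings()(ids)                 # shape (1, T, lm_dim)
inputs_embeds = torch.cat([soft_prompt.unsqueeze(0), text_embeds], dim=1)
out = lm(inputs_embeds=inputs_embeds)                        # graph embedding injected as a soft prompt

The two strategies are presumably used as alternatives in the paper; they are combined in a single script here only to keep the sketch compact.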

URL

https://arxiv.org/abs/2506.12978

PDF

https://arxiv.org/pdf/2506.12978.pdf

