
Tackling Social Bias against the Poor: A Dataset and Taxonomy on Aporophobia

2025-04-17 16:53:14
Georgina Curto, Svetlana Kiritchenko, Muhammad Hammad Fahim Siddiqui, Isar Nejadgholi, Kathleen C. Fraser

Abstract

Eradicating poverty is the first goal in the United Nations Sustainable Development Goals. However, aporophobia -- the societal bias against people living in poverty -- constitutes a major obstacle to designing, approving and implementing poverty-mitigation policies. This work presents an initial step towards operationalizing the concept of aporophobia to identify and track harmful beliefs and discriminatory actions against poor people on social media. In close collaboration with non-profits and governmental organizations, we conduct data collection and exploration. Then we manually annotate a corpus of English tweets from five world regions for the presence of (1) direct expressions of aporophobia, and (2) statements referring to or criticizing aporophobic views or actions of others, to comprehensively characterize the social media discourse related to bias and discrimination against the poor. Based on the annotated data, we devise a taxonomy of categories of aporophobic attitudes and actions expressed through speech on social media. Finally, we train several classifiers and identify the main challenges for automatic detection of aporophobia in social networks. This work paves the way towards identifying, tracking, and mitigating aporophobic views on social media at scale.

Abstract (translated)

Eradicating poverty is the first goal of the United Nations Sustainable Development Goals. However, bias against people living in poverty -- aporophobia -- constitutes a major obstacle to designing, approving, and implementing poverty-mitigation policies. This work takes an initial step towards operationalizing the concept of aporophobia to identify and track harmful beliefs and discriminatory actions against the poor on social media. In close collaboration with non-profit and governmental organizations, we conduct data collection and exploration. We then manually annotate a corpus of English tweets from five world regions for the presence of (1) direct expressions of aporophobia, and (2) statements referring to or criticizing the aporophobic views or actions of others, in order to comprehensively characterize the social media discourse related to bias and discrimination against the poor. Based on the annotated data, we devise a taxonomy of the categories of aporophobic attitudes and actions expressed through speech on social media. Finally, we train several classifiers and identify the main challenges for the automatic detection of aporophobia in social networks. This work paves the way towards identifying, tracking, and mitigating aporophobic views on social media at scale.
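
The abstract mentions training several classifiers for automatic detection of aporophobia but does not specify the models or the data format here. The snippet below is a minimal illustrative sketch only, assuming an annotated CSV of tweets with hypothetical columns "text" and "label" (e.g., direct aporophobia, reported/criticized aporophobia, or neither); it uses a simple TF-IDF plus logistic regression baseline, which may differ from the classifiers actually evaluated in the paper.

```python
# Illustrative sketch, not the authors' pipeline. The file name
# "aporophobia_tweets.csv" and its columns ("text", "label") are hypothetical
# placeholders for the annotated corpus described in the abstract.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

# Load the annotated tweets (assumed label scheme: direct aporophobia,
# reported/criticized aporophobia, or neither).
df = pd.read_csv("aporophobia_tweets.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"]
)

# A simple TF-IDF + logistic regression baseline; class_weight="balanced"
# helps with the class imbalance typical of abusive-language datasets.
clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),
    ("lr", LogisticRegression(max_iter=1000, class_weight="balanced")),
])
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

Such a bag-of-words baseline is a common starting point for tweet classification; stronger systems of the kind the abstract alludes to would typically fine-tune a pretrained transformer on the same annotated data.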

URL

https://arxiv.org/abs/2504.13085

PDF

https://arxiv.org/pdf/2504.13085.pdf

