The Pursuit of Fairness in Artificial Intelligence Models: A Survey

2024-03-26 02:33:36
Tahsin Alamgir Kheya, Mohamed Reda Bouadjenek, Sunil Aryal

Abstract

Artificial Intelligence (AI) models are now being utilized in all facets of our lives, such as healthcare, education, and employment. Since they are used in numerous sensitive environments and make decisions that can be life-altering, potentially biased outcomes are a pressing concern. Developers should ensure that such models do not manifest any unexpected discriminatory practices, such as partiality toward certain genders, ethnicities, or people with disabilities. With the ubiquitous dissemination of AI systems, researchers and practitioners are becoming more aware of unfair models and are obliged to mitigate bias in them. Significant research has been conducted to address such issues and to ensure that models do not intentionally or unintentionally perpetuate bias. This survey offers a synopsis of the different ways researchers have promoted fairness in AI systems. We explore the different definitions of fairness found in the current literature. We create a comprehensive taxonomy by categorizing different types of bias and investigate cases of biased AI in different application domains. We conduct a thorough study of the approaches and techniques researchers have employed to mitigate bias in AI models. We also delve into the impact of biased models on user experience and the ethical considerations involved in developing and deploying such models. We hope this survey helps researchers and practitioners understand the intricate details of fairness and bias in AI systems. By sharing it, we aim to promote further discourse in the domain of equitable and responsible AI.
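
As a concrete illustration of the "different definitions of fairness" the abstract mentions, two of the most common group-fairness criteria in this literature are demographic parity and equal opportunity. The minimal Python sketch below computes the gap between two protected groups under each criterion; the function names and toy data are our own illustrative assumptions, not taken from the paper.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups.

    Demographic parity asks that P(Y_hat = 1 | A = 0) equal
    P(Y_hat = 1 | A = 1) for a protected attribute A.
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_0 = y_pred[group == 0].mean()  # positive rate in group A = 0
    rate_1 = y_pred[group == 1].mean()  # positive rate in group A = 1
    return abs(rate_0 - rate_1)

def equal_opportunity_difference(y_true, y_pred, group):
    """Gap in true-positive rates (recall) between two groups.

    Equal opportunity asks that P(Y_hat = 1 | Y = 1, A = 0) equal
    P(Y_hat = 1 | Y = 1, A = 1): qualified individuals in both
    groups should receive positive decisions at the same rate.
    """
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    def tpr(g):
        return y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Toy data: eight individuals, binary predictions, binary protected group.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_difference(y_pred, group))         # 0.25
print(equal_opportunity_difference(y_true, y_pred, group))  # ~0.33
```

Note that the two criteria can disagree: a model can equalize positive-prediction rates while still accepting qualified members of one group more often than the other, which is one reason surveys such as this one catalog many distinct fairness definitions rather than a single metric.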

URL

https://arxiv.org/abs/2403.17333

PDF

https://arxiv.org/pdf/2403.17333.pdf

