Abstract
Artificial Intelligence (AI) models are now used in nearly every facet of our lives, including healthcare, education, and employment. Because they operate in sensitive settings and make decisions that can be life-altering, potentially biased outcomes are a pressing concern. Developers should ensure that such models do not exhibit unexpected discriminatory behavior, such as bias against certain genders, ethnicities, or people with disabilities. With the ubiquitous adoption of AI systems, researchers and practitioners are becoming increasingly aware of unfair models and are working to mitigate the bias in them. Significant research has addressed these issues to ensure that models do not intentionally or unintentionally perpetuate bias. This survey offers a synopsis of the ways researchers have promoted fairness in AI systems. We examine the definitions of fairness found in the current literature, construct a comprehensive taxonomy that categorizes different types of bias, and investigate cases of biased AI across application domains. We then conduct a thorough study of the approaches and techniques researchers employ to mitigate bias in AI models. We also examine the impact of biased models on user experience and the ethical considerations involved in developing and deploying such models. We hope this survey helps researchers and practitioners understand the intricacies of fairness and bias in AI systems. By sharing this thorough survey, we aim to promote further discourse on equitable and responsible AI.
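As a concrete illustration of the "definitions of fairness" the survey refers to, the sketch below computes demographic parity difference, one widely used group-fairness criterion. This example is not taken from the survey itself; the function name and the toy data are illustrative assumptions.

```python
# Hedged sketch: demographic parity difference, one common fairness
# definition (illustrative; not drawn from the survey's own material).

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: list of 0/1 model predictions
    group:  list of group labels ("a" or "b"), aligned with y_pred
    """
    def positive_rate(g):
        preds = [p for p, gr in zip(y_pred, group) if gr == g]
        return sum(preds) / len(preds)

    return abs(positive_rate("a") - positive_rate("b"))


# Toy example: group "a" receives positive predictions 75% of the time,
# group "b" only 25% of the time, so the disparity is 0.5.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(y_pred, group))  # -> 0.5
```

A value of 0 would indicate that both groups receive positive predictions at the same rate; larger values indicate greater disparity under this particular definition.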
URL
https://arxiv.org/abs/2403.17333