Abstract
With the rapid proliferation of artificial intelligence, there is growing concern over its potential to exacerbate existing biases and societal disparities and to introduce novel ones. This issue has prompted widespread attention from academia, policymakers, industry, and civil society. While evidence suggests that integrating human perspectives can mitigate bias-related issues in AI systems, doing so also introduces challenges associated with the cognitive biases inherent in human decision-making. Our research reviews existing methodologies and ongoing investigations aimed at understanding the annotation attributes that contribute to bias.
URL
https://arxiv.org/abs/2404.19071