Abstract
We evaluate five English NLP benchmark datasets (available on the SuperGLUE leaderboard) for bias along multiple axes. The datasets are: Boolean Questions (BoolQ), CommitmentBank (CB), Winograd Schema Challenge (WSC), Winogender diagnostic (AXg), and Recognising Textual Entailment (RTE). Bias can be harmful, and it is known to be common in the data from which ML models learn. To mitigate bias in data, it is crucial to be able to estimate it objectively. We use bipol, a novel multi-axes bias metric with explainability, to quantify and explain how much bias exists in these datasets. Multilingual, multi-axes bias evaluation is not very common; hence, we also contribute a new, large, labelled Swedish bias-detection dataset of about 2 million samples, translated from the English version. In addition, we contribute new multi-axes lexica for bias detection in Swedish. We train a SotA model on the new dataset for bias detection. We make the code, model, and new dataset publicly available.
URL
https://arxiv.org/abs/2301.12139