Abstract
A major problem with automated classification systems is that, if they are not engineered correctly and with fairness considerations, they can be detrimental to certain populations. Furthermore, while engineers have developed cutting-edge technologies for image classification, there is still a gap in applying these models to human heritage collections, where data sets usually consist of low-quality pictures of people of diverse ethnicity, gender, and age. In this work, we evaluate three bias mitigation techniques using two state-of-the-art neural networks, Xception and EfficientNet, for gender classification. Moreover, we explore the use of transfer learning with a fair data set to overcome training data scarcity. We evaluated the effectiveness of the bias mitigation pipeline on a cultural heritage collection of photographs from the 19th and 20th centuries, and we used the FairFace data set for the transfer learning experiments. We found that transfer learning yields better performance when working with a small data set. Moreover, the fairest classifier was obtained by combining transfer learning with threshold adjustment, re-weighting, and image augmentation as bias mitigation methods.
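Two of the bias mitigation techniques named above, re-weighting and threshold adjustment, can be illustrated with a minimal sketch. The function names and the equal-opportunity selection criterion below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np


def class_weights(labels):
    """Inverse-frequency re-weighting: rarer classes receive larger
    training weights so the loss does not favour the majority class."""
    classes, counts = np.unique(labels, return_counts=True)
    weights = counts.sum() / (len(classes) * counts)
    return dict(zip(classes.tolist(), weights.tolist()))


def pick_threshold(scores, labels, groups):
    """Threshold adjustment: scan candidate decision thresholds and keep
    the one that minimises the gap in true-positive rate between
    demographic groups (an equal-opportunity style criterion)."""
    best_t, best_gap = 0.5, np.inf
    for t in np.linspace(0.05, 0.95, 19):
        preds = scores >= t
        tprs = []
        for g in np.unique(groups):
            mask = (groups == g) & (labels == 1)
            if mask.sum() == 0:
                continue
            tprs.append(preds[mask].mean())
        if not tprs:
            continue
        gap = max(tprs) - min(tprs)
        if gap < best_gap:
            best_gap, best_t = gap, t
    return best_t
```

The weights would be passed to the training loop (e.g. as per-sample loss weights), while the selected threshold replaces the default 0.5 cut-off at inference time.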
URL
https://arxiv.org/abs/2303.11449