Abstract
As bias in widely deployed machine learning systems receives growing attention, the accuracy loss that typically accompanies fairness improvements remains a serious concern for researchers. To address this problem, we present a novel analysis of the expected fairness quality of weighted voting, suitable for both binary and multi-class classification. The analysis accounts for the correction of biased predictions by ensemble members and yields learning bounds that are amenable to efficient minimisation. We further propose a pruning method based on this analysis and the concepts of domination and Pareto optimality, which is able to increase fairness with little or even no decline in accuracy. Experimental results indicate that the proposed learning bounds are faithful and that the proposed pruning method can indeed improve ensemble fairness without much accuracy degradation.
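The pruning idea in the abstract rests on Pareto optimality: an ensemble member is kept only if no other member is at least as good on both accuracy and fairness and strictly better on one. A minimal sketch of that selection rule, with illustrative scores that are not from the paper:

```python
# Hypothetical sketch of Pareto-optimal ensemble pruning over
# (accuracy, fairness) pairs, both to be maximised. The scores below
# are made up for illustration; the paper's actual criterion is its
# derived fairness bound, not these raw metrics.

def dominates(a, b):
    """a dominates b if a is at least as good on both criteria
    and strictly better on at least one."""
    return a[0] >= b[0] and a[1] >= b[1] and (a[0] > b[0] or a[1] > b[1])

def pareto_front(scores):
    """Return indices of members not dominated by any other member."""
    return [i for i, s in enumerate(scores)
            if not any(dominates(t, s)
                       for j, t in enumerate(scores) if j != i)]

# (accuracy, fairness) for five hypothetical ensemble members
scores = [(0.92, 0.60), (0.90, 0.75), (0.85, 0.80), (0.88, 0.70), (0.80, 0.65)]
print(pareto_front(scores))  # members 3 and 4 are dominated and pruned
```

Members 3 and 4 are dropped because members 1 and 2 beat them on both criteria; the survivors form the Pareto front, from which a weighted vote can then be built.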
URL
https://arxiv.org/abs/2301.10813