Abstract
Machine learning models pre-trained on large datasets have achieved remarkable convergence and robustness properties. However, these models often exploit spurious correlations between certain attributes and labels, which are prevalent in the majority of examples within specific categories but are not predictive of these categories in general. The learned spurious correlations may persist even after fine-tuning on new data, which degrades models' performance on examples that do not exhibit the spurious correlation. In this work, we propose a simple and highly effective method to eliminate spurious correlations from pre-trained models. The key idea of our method is to leverage a small set of examples with spurious attributes, and balance the spurious attributes across all classes via data mixing. We theoretically confirm the effectiveness of our method, and empirically demonstrate its state-of-the-art performance on various vision and NLP tasks, including eliminating spurious correlations from pre-trained ResNet50 on Waterbirds and CelebA, adversarially pre-trained ResNet50 on ImageNet, and BERT pre-trained on CivilComments.
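The balancing idea described above — drawing from a small set of examples that carry the spurious attribute and mixing them into every class so the attribute is no longer predictive of any label — can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's implementation: the function name, the mixup-style convex combination, and the `alpha` parameter are all assumptions.

```python
import random

import numpy as np


def balance_spurious_by_mixing(dataset, spurious_examples, alpha=0.4, seed=0):
    """Hypothetical sketch of attribute-balanced data mixing.

    Every training example, regardless of its class, is convexly mixed
    with a randomly drawn input that exhibits the spurious attribute,
    so the attribute appears at the same rate in all classes and stops
    being predictive of any label.

    dataset:            list of (x, y) pairs, x as np.ndarray
    spurious_examples:  small list of np.ndarray inputs exhibiting the
                        spurious attribute (assumed to be available)
    alpha:              Beta-distribution parameter for the mixing
                        coefficient (mixup-style; an assumption here)
    """
    rng = random.Random(seed)
    mixed = []
    for x, y in dataset:
        s = rng.choice(spurious_examples)
        lam = rng.betavariate(alpha, alpha)
        lam = max(lam, 1.0 - lam)  # keep the original example dominant
        mixed.append((lam * x + (1.0 - lam) * s, y))
    return mixed
```

Because every class receives the same distribution of spurious inputs, a model fine-tuned on the mixed data has no incentive to associate the attribute with any particular label.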
Abstract (translated)
In this work, we propose a simple and highly effective method for eliminating spurious correlations from machine learning models trained on large datasets. The key idea of the method is to leverage a small number of examples with spurious attributes and balance the spurious attributes across all classes via data mixing. We theoretically prove the effectiveness of the method and empirically demonstrate its state-of-the-art performance on a variety of vision and natural language processing tasks, including eliminating spurious correlations from ResNet50 pre-trained on Waterbirds and CelebA, ResNet50 adversarially pre-trained on ImageNet, and BERT pre-trained on CivilComments.
URL
https://arxiv.org/abs/2305.14521