Abstract
Data augmentation plays a pivotal role in enhancing and diversifying training data. Nonetheless, consistently improving model performance across varied learning scenarios, especially those with inherent data biases, remains challenging. To address this, we propose augmenting the deep features of samples by incorporating their adversarial and anti-adversarial perturbation distributions, enabling adaptive adjustment of learning difficulty tailored to each sample's specific characteristics. We then theoretically show that, as the number of augmented copies grows indefinitely, our augmentation process approximates the optimization of a surrogate loss function. This insight leads us to develop a meta-learning-based framework for optimizing classifiers with this novel loss, introducing the effects of augmentation while bypassing the explicit augmentation process. We conduct extensive experiments across four common biased learning scenarios: long-tail learning, generalized long-tail learning, noisy label learning, and subpopulation shift learning. The empirical results demonstrate that our method consistently achieves state-of-the-art performance, highlighting its broad adaptability.
URL
https://arxiv.org/abs/2404.16307