Abstract
It is well known that the performance of well-trained deep neural networks may degrade significantly when they are applied to data with even slightly shifted distributions. Recent studies have shown that introducing certain perturbations to feature statistics (e.g., mean and standard deviation) during training can enhance cross-domain generalization ability. Existing methods typically generate such perturbations from the feature statistics within a mini-batch, which limits their representation capability. Inspired by the domain generalization objective, we introduce a novel Adversarial Style Augmentation (ASA) method, which explores broader style spaces by generating more effective statistics perturbations via adversarial training. Specifically, we first search for the most sensitive direction and intensity of the statistics perturbation by maximizing the task loss. By updating the model against this adversarial statistics perturbation during training, we allow the model to explore the worst-case domain and hence improve its generalization performance. To facilitate the application of ASA, we design a simple yet effective module, namely AdvStyle, which instantiates the ASA method in a plug-and-play manner. We justify the efficacy of AdvStyle on tasks of cross-domain classification and instance retrieval, where it achieves higher mean accuracy and lower performance fluctuation. In particular, our method significantly outperforms its competitors on the PACS dataset under the single-source generalization setting, e.g., boosting classification accuracy from 61.2% to 67.1% with a ResNet50 backbone. Our code will be available at this https URL.
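The core idea above — perturbing feature statistics in the direction that maximizes the task loss, then training on the re-styled features — can be illustrated with a toy sketch. All helper names here (`stats`, `restyle`, `adversarial_style`) are hypothetical, and the loss-ascent direction is estimated by finite differences purely for illustration; the actual method operates on mini-batch channel statistics with automatic differentiation:

```python
import math

def stats(feat):
    """Mean and standard deviation of a 1-D feature list."""
    mu = sum(feat) / len(feat)
    var = sum((x - mu) ** 2 for x in feat) / len(feat)
    return mu, math.sqrt(var) + 1e-6

def restyle(feat, mu, sigma, new_mu, new_sigma):
    """Normalize by (mu, sigma), then re-style with perturbed statistics."""
    return [new_sigma * (x - mu) / sigma + new_mu for x in feat]

def adversarial_style(feat, task_loss, eps=0.1, delta=1e-4):
    """One ASA-style step: push (mu, sigma) in the loss-ascending direction."""
    mu, sigma = stats(feat)
    base = task_loss(feat)
    # Finite-difference surrogate for the gradient sign w.r.t. the statistics.
    g_mu = task_loss(restyle(feat, mu, sigma, mu + delta, sigma)) - base
    g_sigma = task_loss(restyle(feat, mu, sigma, mu, sigma + delta)) - base
    adv_mu = mu + eps * (1 if g_mu > 0 else -1)
    adv_sigma = sigma + eps * (1 if g_sigma > 0 else -1)
    return restyle(feat, mu, sigma, adv_mu, adv_sigma)

# Toy "task loss": distance of the feature mean from a target value of 0.
loss = lambda f: abs(sum(f) / len(f))
feat = [0.2, 0.4, 0.6, 0.8]
adv_feat = adversarial_style(feat, loss)
```

Training then proceeds on `adv_feat` instead of `feat`, so the model repeatedly sees the style shift it currently handles worst.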
URL
https://arxiv.org/abs/2301.12643