Abstract
Learning models that are robust to test-time distribution shifts is a key concern in domain generalization, and more broadly for the real-world applicability of machine learning. Invariant Risk Minimization (IRM) is one framework that aims to learn deep invariant features from multiple domains, and it has since inspired several variants. A key assumption behind the success of these methods is that the underlying causal mechanisms/features remain invariant across domains and that the true invariant features are sufficient to learn the optimal predictor. In practical problem settings these assumptions are often violated, in which case IRM learns a sub-optimal predictor for the task. In this work, we propose the notion of partial invariance as a relaxation of the IRM framework. Under our problem setting, we first highlight the sub-optimality of the IRM solution. We then demonstrate how partitioning the training domains, assuming access to some meta-information about them, can improve the performance of invariant models via partial invariance. Finally, we verify our conclusions with experiments both in linear settings and on classification tasks in language and images with deep models.
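The IRM framework that the abstract relaxes is commonly instantiated as the IRMv1 objective (Arjovsky et al., 2019): the sum of per-environment risks plus a penalty that measures how far a shared representation is from being simultaneously optimal in every environment. Below is a minimal, dependency-free sketch under a squared-error loss with a scalar dummy classifier; the function names and toy data are illustrative, not from the paper.

```python
# Sketch of the IRMv1 penalty: the squared gradient of each environment's
# risk with respect to a fixed scalar "dummy" classifier w, evaluated at w = 1.
# A representation phi is invariant when this gradient vanishes in every
# environment, i.e. the same classifier (w = 1) is optimal everywhere.

def irm_penalty(phi, y):
    """Squared gradient of mean((w*phi_i - y_i)^2) w.r.t. w at w = 1."""
    n = len(phi)
    # d/dw mean((w*phi_i - y_i)^2) at w=1  =  mean(2 * phi_i * (phi_i - y_i))
    grad = sum(2.0 * p * (p - t) for p, t in zip(phi, y)) / n
    return grad ** 2

def irm_objective(envs, lam=1.0):
    """Sum over environments of empirical risk + lam * invariance penalty."""
    total = 0.0
    for phi, y in envs:
        risk = sum((p - t) ** 2 for p, t in zip(phi, y)) / len(phi)
        total += risk + lam * irm_penalty(phi, y)
    return total

# Two toy "environments" of (feature, label) pairs:
env_a = ([1.0, 2.0, 3.0], [1.0, 2.0, 3.0])  # labels match features: penalty 0
env_b = ([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])  # shifted labels: positive penalty

print(irm_penalty(*env_a))                   # 0.0
print(irm_penalty(*env_b))                   # 16.0
print(irm_objective([env_a, env_b], lam=1.0))
```

When the invariance assumptions the abstract describes fail (no single classifier is optimal in all environments), the penalty cannot be driven to zero everywhere, which is the sub-optimality the paper's partial-invariance relaxation targets by penalizing only within partitions of the training domains.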
URL
https://arxiv.org/abs/2301.12067