Abstract
Deep learning models often struggle to maintain performance when deployed on data distributions different from their training data, particularly in real-world applications where environmental conditions frequently change. While Multi-source Domain Generalization (MDG) has shown promise in addressing this challenge by leveraging multiple source domains during training, its practical application is limited by the significant costs and difficulties associated with creating multi-domain datasets. To address this limitation, we propose Pseudo Multi-source Domain Generalization (PMDG), a novel framework that enables the application of sophisticated MDG algorithms in more practical Single-source Domain Generalization (SDG) settings. PMDG generates multiple pseudo-domains from a single source domain through style transfer and data augmentation techniques, creating a synthetic multi-domain dataset that can be used with existing MDG algorithms. Through extensive experiments with PseudoDomainBed, our modified version of the DomainBed benchmark, we analyze the effectiveness of PMDG across multiple datasets and architectures. Our analysis reveals several key findings, including a positive correlation between MDG and PMDG performance and the potential of pseudo-domains to match or exceed actual multi-domain performance with sufficient data. These comprehensive empirical results provide valuable insights for future research in domain generalization. Our code is available at this https URL.
URL
https://arxiv.org/abs/2505.23173
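The pseudo-domain construction the abstract describes — splitting one source dataset into several stylized copies so that unchanged MDG algorithms can treat them as distinct domains — can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the style functions (`grayscale`, `channel_shuffle`) and the function `make_pseudo_domains` are hypothetical stand-ins for the style-transfer and augmentation pipeline, and images are represented as plain lists of RGB tuples to keep the sketch dependency-free.

```python
# Hedged sketch of the PMDG idea: apply one distinct "style" per copy of the
# single source dataset, yielding a synthetic multi-domain dataset.
# All names here are illustrative, not from the paper's codebase.

def grayscale(pixel):
    # Collapse RGB to luminance: one possible pseudo-domain "style".
    r, g, b = pixel
    y = int(0.299 * r + 0.587 * g + 0.114 * b)
    return (y, y, y)

def channel_shuffle(pixel):
    # Permute color channels: another cheap stylization.
    r, g, b = pixel
    return (b, r, g)

def identity(pixel):
    # Keep the original source domain unchanged.
    return pixel

# Each entry defines one pseudo-domain.
PSEUDO_DOMAIN_STYLES = [identity, grayscale, channel_shuffle]

def make_pseudo_domains(dataset):
    """dataset: list of (image, label), where image is a list of RGB tuples.
    Returns one stylized copy of the dataset per style, i.e. a synthetic
    multi-domain dataset that an existing MDG algorithm could consume."""
    domains = []
    for style in PSEUDO_DOMAIN_STYLES:
        stylized = [([style(p) for p in img], lbl) for img, lbl in dataset]
        domains.append(stylized)
    return domains

# Toy usage: two "images" of two pixels each, with class labels 0 and 1.
data = [([(255, 0, 0), (0, 128, 255)], 0), ([(10, 200, 30), (0, 0, 0)], 1)]
domains = make_pseudo_domains(data)
print(len(domains))  # 3 pseudo-domains from a single source dataset
```

In a realistic pipeline the per-pixel style functions would be replaced by learned style transfer or standard image augmentations, but the structural point is the same: labels are shared across pseudo-domains while the input distributions differ, which is exactly the interface MDG algorithms expect.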