Abstract
Vision-language foundation models such as CLIP have shown impressive zero-shot generalization, but fine-tuning on downstream datasets can cause overfitting and a loss of generalization ability on unseen domains. Although collecting additional data from new domains of interest is possible, this is often impractical due to the difficulty of obtaining annotated data. To address this, we propose a plug-and-play feature augmentation method called LDFS (Language-Guided Diverse Feature Synthesis) that synthesizes new domain features and improves existing CLIP fine-tuning strategies. LDFS makes three main contributions: 1) to synthesize novel domain features and promote diversity, we propose an instance-conditional feature augmentation strategy based on a text-guided feature augmentation loss; 2) to maintain feature quality after augmentation, we introduce a pairwise regularizer that preserves the coherence of augmented features within the CLIP feature space; 3) we propose stochastic text feature augmentation to reduce the modality gap and further facilitate text-guided feature synthesis. Extensive experiments show the superiority of LDFS in improving CLIP's generalization to unseen domains without collecting data from those domains. The code will be made publicly available.
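The abstract does not describe implementation details, but the three components can be illustrated with a minimal NumPy sketch. Everything below is an assumption for illustration only: the function names, the affine form of the instance-conditional perturbation, the Gaussian noise on text features, and the cosine-based pairwise regularizer are plausible stand-ins, not the paper's actual formulation, and random vectors stand in for real CLIP features.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """L2-normalize feature vectors along the last axis (CLIP features live on a hypersphere)."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def augment_image_features(img_feats, rng, scale=0.1):
    """Instance-conditional augmentation (hypothetical form): each feature gets
    its own random affine perturbation, promoting diverse synthetic 'domains'."""
    gamma = 1.0 + scale * rng.standard_normal(img_feats.shape)
    beta = scale * rng.standard_normal(img_feats.shape)
    return l2_normalize(gamma * img_feats + beta)

def stochastic_text_augmentation(txt_feats, rng, sigma=0.05):
    """Stochastic text feature augmentation (hypothetical form): Gaussian noise
    on text embeddings, intended to reduce the image-text modality gap."""
    return l2_normalize(txt_feats + sigma * rng.standard_normal(txt_feats.shape))

def pairwise_coherence_loss(orig, aug):
    """Pairwise regularizer (hypothetical form): keep each augmented feature
    close to its source in cosine similarity, preserving feature quality."""
    cos = np.sum(l2_normalize(orig) * l2_normalize(aug), axis=-1)
    return float(np.mean(1.0 - cos))

rng = np.random.default_rng(0)
img = l2_normalize(rng.standard_normal((4, 512)))  # stand-in for CLIP image features
txt = l2_normalize(rng.standard_normal((4, 512)))  # stand-in for CLIP text features
aug_img = augment_image_features(img, rng)
aug_txt = stochastic_text_augmentation(txt, rng)
print(pairwise_coherence_loss(img, aug_img))  # small value: augmented features stay coherent
```

In a real fine-tuning loop, a term like `pairwise_coherence_loss` would be added to the fine-tuning objective so that the synthesized features diversify the training distribution without drifting out of the CLIP feature space.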
URL
https://arxiv.org/abs/2405.02586