Paper Reading AI Learner

Generalizing CLIP to Unseen Domain via Text-Guided Diverse Novel Feature Synthesis

2024-05-04 06:53:18
Siyuan Yan, Cheng Luo, Zhen Yu, Zongyuan Ge

Abstract

Vision-language foundation models like CLIP have shown impressive zero-shot generalization, but fine-tuning on downstream datasets can cause overfitting and a loss of generalization ability on unseen domains. Although collecting additional data from new domains of interest is possible, doing so is often impractical because annotated data are hard to obtain. To address this, we propose a plug-and-play feature augmentation method called LDFS (Language-Guided Diverse Feature Synthesis) to synthesize new domain features and improve existing CLIP fine-tuning strategies. LDFS makes three main contributions: 1) To synthesize novel domain features and promote diversity, we propose an instance-conditional feature augmentation strategy based on a text-guided feature augmentation loss. 2) To maintain feature quality after augmentation, we introduce a pairwise regularizer that preserves the coherence of augmented features within the CLIP feature space. 3) We propose stochastic text feature augmentation to reduce the modality gap and further facilitate text-guided feature synthesis. Extensive experiments demonstrate the superiority of LDFS in improving CLIP's generalization to unseen domains without collecting data from those domains. The code will be made publicly available.
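
The abstract describes the three components only at a high level and gives no formulas, so the sketch below is purely illustrative: a minimal PyTorch interpretation of instance-conditional feature augmentation, a text-guided direction-matching loss, a pairwise regularizer, and stochastic text feature augmentation. Every module name, loss form, and hyperparameter here is an assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class InstanceConditionalAugmenter(nn.Module):
    """Hypothetical augmenter (assumption): predicts a per-instance
    perturbation of a CLIP image feature, conditioned on that feature,
    with added noise to promote diversity."""

    def __init__(self, dim=512, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim)
        )

    def forward(self, img_feat, noise_scale=0.1):
        # Instance-conditional shift plus random noise; renormalize to
        # stay on the unit sphere where CLIP features live.
        delta = self.net(img_feat) + noise_scale * torch.randn_like(img_feat)
        return F.normalize(img_feat + delta, dim=-1)


def text_guided_aug_loss(img_feat, aug_feat, src_text, tgt_text):
    """Assumed loss form: align the image-feature shift with a
    text-described domain shift (e.g. 'a photo' -> 'a sketch')."""
    img_dir = F.normalize(aug_feat - img_feat, dim=-1)
    txt_dir = F.normalize(tgt_text - src_text, dim=-1).expand_as(img_dir)
    return (1 - F.cosine_similarity(img_dir, txt_dir, dim=-1)).mean()


def pairwise_regularizer(img_feat, aug_feat, class_text_feats):
    """Assumed regularizer: keep each augmented feature's similarity
    pattern over class text features close to the original's, so
    augmentation does not destroy semantic content."""
    sim_orig = img_feat @ class_text_feats.t()
    sim_aug = aug_feat @ class_text_feats.t()
    return F.mse_loss(sim_aug, sim_orig)


def stochastic_text_aug(text_feat, sigma=0.05):
    """Assumed stochastic text augmentation: Gaussian noise on the
    normalized text feature to help bridge the modality gap."""
    return F.normalize(text_feat + sigma * torch.randn_like(text_feat), dim=-1)


# Toy usage with random stand-ins for CLIP features (512-dim, batch of 8).
aug = InstanceConditionalAugmenter(dim=512)
img = F.normalize(torch.randn(8, 512), dim=-1)
cls_txt = F.normalize(torch.randn(10, 512), dim=-1)          # class prompts
src = F.normalize(torch.randn(1, 512), dim=-1)               # source-domain prompt
tgt = stochastic_text_aug(F.normalize(torch.randn(1, 512), dim=-1))
new = aug(img)
loss = text_guided_aug_loss(img, new, src, tgt) + 0.1 * pairwise_regularizer(img, new, cls_txt)
```

The direction-matching loss mirrors how CLIP-space edits are commonly steered by text prompts in related work; whether LDFS uses this exact form, or this regularizer weighting, is not stated in the abstract.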

Abstract (translated)

Vision-language foundation models such as CLIP have demonstrated impressive zero-shot generalization, but fine-tuning on downstream datasets can lead to overfitting and a loss of generalization ability on unseen domains. Although additional data can be collected from new domains of interest, this approach is often infeasible because of the difficulty of obtaining annotated data. To address this problem, we propose a plug-and-play feature augmentation method, LDFS (Language-Guided Diverse Feature Synthesis), to synthesize new domain features and improve existing CLIP fine-tuning strategies. LDFS makes three main contributions: 1) to synthesize novel domain features and promote diversity, we propose an instance-conditional feature augmentation strategy based on a text-guided feature augmentation loss; 2) to preserve feature quality after augmentation, we introduce a pairwise regularizer that maintains the coherence of augmented features within the CLIP feature space; 3) we propose stochastic text feature augmentation to reduce the modality gap and further facilitate the text-guided feature synthesis process. Extensive experiments show that LDFS substantially improves CLIP's generalization to unseen domains without collecting data from those domains. The code will be made publicly available.

URL

https://arxiv.org/abs/2405.02586

PDF

https://arxiv.org/pdf/2405.02586.pdf

