
On the test-time zero-shot generalization of vision-language models: Do we really need prompt learning?

2024-05-03 17:34:02
Maxime Zanella, Ismail Ben Ayed

Abstract

The development of large vision-language models, notably CLIP, has catalyzed research into effective adaptation techniques, with a particular focus on soft prompt tuning. Conjointly, test-time augmentation, which utilizes multiple augmented views of a single image to enhance zero-shot generalization, is emerging as a significant area of interest. This has predominantly directed research efforts toward test-time prompt tuning. In contrast, we introduce a robust MeanShift for Test-time Augmentation (MTA), which surpasses prompt-based methods without requiring this intensive training procedure. This positions MTA as an ideal solution for both standalone and API-based applications. Additionally, our method does not rely on ad hoc rules (e.g., a confidence threshold) used in some previous test-time augmentation techniques to filter the augmented views. Instead, MTA incorporates a quality-assessment variable for each view, termed the inlierness score, directly into its optimization process. This score is jointly optimized with a density mode-seeking process, leading to an efficient, training- and hyperparameter-free approach. We extensively benchmark our method on 15 datasets and demonstrate MTA's superiority and computational efficiency. Easily deployed as a plug-and-play module on top of zero-shot models and state-of-the-art few-shot methods, MTA shows systematic and consistent improvements.
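Method sketch (illustrative)

To make the abstract's description concrete, below is a minimal NumPy sketch of the mode-seeking idea: a MeanShift-style update over the embeddings of augmented views, with a per-view weight playing the role of the inlierness score. This is not the authors' reference implementation; the Gaussian kernel, the median-distance bandwidth heuristic, the softmax-style re-weighting, and the alternating (rather than jointly optimized) inlierness update are simplifying assumptions made for clarity, as are the function and variable names.

```python
# Illustrative MeanShift-for-test-time-augmentation sketch (NOT the paper's
# exact formulation): the kernel, bandwidth heuristic, and alternating
# inlierness update are assumptions chosen for readability.
import numpy as np

def mta_sketch(view_feats, text_feats, n_iters=5):
    """view_feats: (N, d) L2-normalized embeddings of N augmented views.
    text_feats: (C, d) L2-normalized class (prompt) embeddings.
    Returns the predicted class index for the test image."""
    n = view_feats.shape[0]
    mode = view_feats.mean(axis=0)            # initial mode estimate
    # Bandwidth from the median pairwise squared distance (assumed heuristic).
    d2 = ((view_feats[:, None] - view_feats[None]) ** 2).sum(-1)
    h = np.median(d2) + 1e-12
    for _ in range(n_iters):
        # Gaussian-kernel affinity of each view to the current mode.
        k = np.exp(-((view_feats - mode) ** 2).sum(-1) / h)
        # "Inlierness" stand-in: views near the density mode get larger
        # weights (the paper optimizes these scores jointly instead).
        inlier = k / k.sum()
        # Mean-shift step: inlierness-weighted average of the views.
        mode = (inlier[:, None] * view_feats).sum(axis=0)
        mode /= np.linalg.norm(mode)          # stay on the unit sphere
    # Zero-shot prediction: cosine similarity of the mode to class prompts.
    return int(np.argmax(text_feats @ mode))
```

In practice, the N views would come from random augmentations (e.g., random resized crops) of the test image encoded with a vision-language image encoder such as CLIP's, and text_feats from the text encoder applied to class prompts; no gradient updates or tuned hyperparameters are involved, which is what lets the method run as a plug-and-play module at inference time.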


URL

https://arxiv.org/abs/2405.02266

PDF

https://arxiv.org/pdf/2405.02266.pdf

