Abstract
Pre-trained contrastive vision-language models have demonstrated remarkable performance across a wide range of tasks. However, they often struggle on fine-grained datasets with categories not adequately represented during pre-training, which makes adaptation necessary. Recent works have shown promising results by utilizing samples from web-scale databases for retrieval-augmented adaptation, especially in low-data regimes. Despite the empirical success, understanding how retrieval impacts the adaptation of vision-language models remains an open research question. In this work, we adopt a reflective perspective by presenting a systematic study to understand the roles of key components in retrieval-augmented adaptation. We unveil new insights on uni-modal and cross-modal retrieval and highlight the critical role of logit ensemble for effective adaptation. We further present theoretical underpinnings that directly support our empirical observations.
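The logit ensemble the abstract highlights can be illustrated with a minimal sketch: combine the zero-shot classifier's logits with those of the retrieval-adapted classifier via a weighted sum. This is an illustrative reconstruction, not the paper's exact method; the function name `logit_ensemble` and the mixing weight `alpha` are assumptions introduced here.

```python
import numpy as np

def logit_ensemble(zero_shot_logits, adapted_logits, alpha=0.5):
    """Weighted combination of two classifiers' logits.

    alpha is a hypothetical mixing weight between the zero-shot model
    (alpha=1.0) and the retrieval-adapted model (alpha=0.0); the paper's
    actual ensembling scheme may differ.
    """
    return alpha * zero_shot_logits + (1.0 - alpha) * adapted_logits

# Toy example: 2 samples, 3 classes. Each model alone picks a different
# class for sample 0; the ensemble arbitrates between them.
zs = np.array([[2.0, 0.5, 0.1], [0.2, 1.5, 0.3]])
ad = np.array([[1.0, 1.5, 0.2], [0.1, 0.4, 2.0]])
combined = logit_ensemble(zs, ad, alpha=0.5)
preds = combined.argmax(axis=1)
```

The intuition is that the zero-shot model contributes broad pre-training knowledge while the adapted model contributes task-specific signal from retrieved samples; the ensemble lets neither dominate.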
URL
https://arxiv.org/abs/2405.01468