Abstract
The goal of selective prediction is to allow a model to abstain when it may not be able to deliver a reliable prediction, which is important in safety-critical contexts. Existing approaches to selective prediction typically require access to a model's internals, require retraining the model, or study only unimodal models. However, the most powerful models (e.g., GPT-4) are typically available only as black boxes with inaccessible internals, cannot be retrained by end users, and are frequently used for multimodal tasks. We study the possibility of selective prediction for vision-language models in a realistic, black-box setting. We propose using the principle of neighborhood consistency to identify unreliable responses from a black-box vision-language model on visual question answering tasks. We hypothesize that, given only a visual question and a model response, the consistency of the model's responses over the neighborhood of the visual question indicates reliability. In a black-box setting, it is impossible to sample neighbors directly in feature space; instead, we show that a smaller proxy model can be used to approximately sample from the neighborhood. We find that neighborhood consistency can identify model responses to visual questions that are likely unreliable, even in adversarial settings or settings that are out of distribution for the proxy model.
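The core idea can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the function names, the exact-match agreement criterion, and the abstention threshold are all assumptions, and the neighbor questions are taken as given (in the paper they would be generated by sampling from a proxy model's neighborhood of the visual question).

```python
from typing import Callable, List, Optional

# A "black-box model" is anything that maps a question string to an
# answer string; no access to internals is assumed.
Model = Callable[[str], str]


def neighborhood_consistency(model: Model, question: str,
                             neighbors: List[str]) -> float:
    """Fraction of neighbor questions whose answer agrees with the
    answer to the original question (exact string match is a
    simplifying assumption; softer matching could be substituted)."""
    original = model(question)
    if not neighbors:
        return 1.0
    matches = sum(model(q) == original for q in neighbors)
    return matches / len(neighbors)


def selective_predict(model: Model, question: str, neighbors: List[str],
                      threshold: float = 0.5) -> Optional[str]:
    """Return the model's answer, or None (abstain) when neighborhood
    consistency falls below the threshold (hypothetical value)."""
    if neighborhood_consistency(model, question, neighbors) < threshold:
        return None  # abstain: response judged unreliable
    return model(question)
```

A response that flips under small rephrasings of the question yields low consistency and triggers abstention, while a stable response is returned as-is.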
URL
https://arxiv.org/abs/2404.10193