Abstract
Out-of-distribution (OOD) detection is a critical task that has garnered significant attention. The emergence of CLIP has spurred extensive research into zero-shot OOD detection, often in a training-free manner. Current methods leverage expert knowledge from large language models (LLMs) to identify potential outliers. However, these approaches tend to over-rely on knowledge in the text space, neglecting the inherent challenges of detecting OOD samples in the image space. In this paper, we propose a novel pipeline, MM-OOD, which leverages the multimodal reasoning capabilities of multimodal large language models (MLLMs) and their ability to conduct multi-round conversations for enhanced outlier detection. Our method is designed to improve performance on both near-OOD and far-OOD tasks. Specifically, (1) for near-OOD tasks, we directly feed ID images and corresponding text prompts into MLLMs to identify potential outliers; and (2) for far-OOD tasks, we introduce the sketch-generate-elaborate framework: first, we sketch outlier exposure using text prompts, then generate corresponding visual OOD samples, and finally elaborate the detection using multimodal prompts. Experiments demonstrate that our method achieves significant improvements on widely used multimodal datasets such as Food-101, while also validating its scalability on ImageNet-1K.
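The far-OOD framework above can be sketched as a three-stage pipeline. The code below is a minimal illustrative skeleton only: every function name, prompt, and scoring rule is an assumption of this sketch (the abstract does not specify them), and the LLM, text-to-image, and MLLM calls are stubbed out with placeholders.

```python
# Hypothetical skeleton of the sketch-generate-elaborate pipeline for
# far-OOD detection. All names and logic here are illustrative stubs,
# not the paper's actual implementation.

def sketch_outliers(id_classes):
    """Step 1 (sketch): ask an LLM for plausible outlier class names
    given the in-distribution (ID) classes. Stubbed with a fixed rule;
    a real system would query an LLM with a text prompt."""
    return [f"non-{c} object" for c in id_classes]

def generate_ood_images(outlier_names):
    """Step 2 (generate): synthesize visual OOD exemplars for the
    sketched outliers, e.g. via a text-to-image model. Stubbed as
    (name, image_placeholder) pairs."""
    return [(name, f"<generated image: {name}>") for name in outlier_names]

def elaborate_score(test_image, id_classes, ood_exemplars):
    """Step 3 (elaborate): query an MLLM with a multimodal prompt that
    combines ID class names and the generated OOD exemplars, obtaining
    an OOD score in [0, 1]. Stubbed with a trivial membership check."""
    return 0.0 if test_image in id_classes else 1.0

# Toy run with two ID classes (e.g. Food-101 categories).
id_classes = ["pizza", "sushi"]
outliers = sketch_outliers(id_classes)          # step 1: text-space sketch
exemplars = generate_ood_images(outliers)       # step 2: visual generation
score = elaborate_score("pizza", id_classes, exemplars)  # step 3: scoring
```

A test image would be flagged as OOD when its score exceeds a threshold; here the ID sample "pizza" receives a score of 0.0.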
URL
https://arxiv.org/abs/2601.14052