Abstract
Out-of-distribution (OOD) detection is essential in autonomous driving for determining when learning-based components encounter unexpected inputs. Traditional detectors typically rely on encoder models with fixed settings and therefore lack effective means of human interaction. With the rise of large foundation models, multimodal inputs make it possible to use human language as a latent representation, enabling language-defined OOD detection. In this paper, we use the cosine similarity between image and text representations encoded by the multimodal model CLIP as a new representation, improving the transparency and controllability of the latent encodings used for visual anomaly detection. We compare our approach with existing pre-trained encoders, which produce latent representations that are meaningless from the user's standpoint. Our experiments on realistic driving data show that the language-based latent representation outperforms the traditional vision-encoder representation and further improves detection performance when combined with standard representations.
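A minimal sketch of the language-defined representation the abstract describes, assuming the open-source CLIP weights on Hugging Face ("openai/clip-vit-base-patch32"); the prompt list is illustrative and not the authors' exact setup.

# Sketch: cosine similarities between a CLIP image embedding and a set of
# human-readable text prompts, used as an interpretable latent representation.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical prompts: each one defines an interpretable axis of the representation.
prompts = ["a photo of a clear road", "a photo of heavy rain",
           "a photo of dense fog", "a photo of a snowy road"]

def language_representation(image: Image.Image) -> torch.Tensor:
    """Return the vector of cosine similarities between the image and each prompt."""
    inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # Normalize so that the dot product equals cosine similarity.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).squeeze(0)  # one score per prompt

An OOD detector (for example, a distance or density model fit on in-distribution similarity vectors) can then operate on these scores, and a user can inspect or reshape the representation simply by editing the prompts.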
URL
https://arxiv.org/abs/2405.01691