Abstract
Vision-Language Models (VLMs) are becoming increasingly powerful, demonstrating strong performance on a variety of tasks that require both visual and textual understanding. Their generalization abilities make them a promising component for automated driving systems, which must handle unexpected corner cases. However, to be trusted in such safety-critical applications, a model must first possess a reliable perception system. Moreover, since critical objects and agents in traffic scenes are often at a distance, we require systems that are not "shortsighted", i.e., systems with strong perception capabilities at both close range (up to 20 meters) and long range (30+ meters). With this in mind, we introduce Distance-Annotated Traffic Perception Question Answering (DTPQA), the first Visual Question Answering (VQA) benchmark focused solely on perception-based questions in traffic scenes, enriched with distance annotations. By excluding questions that require reasoning, we ensure that model performance reflects perception capabilities alone. Since automated driving hardware has limited processing power and cannot support large VLMs, our study centers on smaller VLMs. More specifically, we evaluate several state-of-the-art (SOTA) small VLMs on DTPQA and show that, despite the simplicity of the questions, these models significantly underperform humans (~60% average accuracy for the best-performing small VLM versus ~85% human accuracy). However, the human sample size was relatively small, which limits the statistical strength of this comparison. We also identify specific perception tasks, such as distinguishing left from right, that remain particularly challenging for these models.
URL
https://arxiv.org/abs/2510.08352