Paper Reading AI Learner

Evaluating Small Vision-Language Models on Distance-Dependent Traffic Perception

2025-10-09 15:38:41
Nikos Theodoridis, Tim Brophy, Reenu Mohandas, Ganesh Sistu, Fiachra Collins, Anthony Scanlan, Ciaran Eising

Abstract

Vision-Language Models (VLMs) are becoming increasingly powerful, demonstrating strong performance on a variety of tasks that require both visual and textual understanding. Their strong generalisation abilities make them a promising component for automated driving systems, which must handle unexpected corner cases. However, to be trusted in such safety-critical applications, a model must first possess a reliable perception system. Moreover, since critical objects and agents in traffic scenes are often at a distance, we require systems that are not "shortsighted", i.e., systems with strong perception capabilities at both close (up to 20 meters) and long (30+ meters) range. With this in mind, we introduce Distance-Annotated Traffic Perception Question Answering (DTPQA), the first Visual Question Answering (VQA) benchmark focused solely on perception-based questions in traffic scenes, enriched with distance annotations. By excluding questions that require reasoning, we ensure that model performance reflects perception capabilities alone. Since automated driving hardware has limited processing power and cannot support large VLMs, our study centers on smaller VLMs. More specifically, we evaluate several state-of-the-art (SOTA) small VLMs on DTPQA and show that, despite the simplicity of the questions, these models significantly underperform compared to humans (~60% average accuracy for the best-performing small VLM versus ~85% human performance). However, it is important to note that the human sample size was relatively small, which imposes statistical limitations. We also identify specific perception tasks, such as distinguishing left from right, that remain particularly challenging for these models.
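As a concrete illustration of the evaluation protocol the abstract describes, the sketch below computes VQA accuracy bucketed by the paper's distance ranges (close, up to 20 meters; long, 30+ meters). This is a minimal Python sketch under stated assumptions: the record fields (question, answer, distance_m) and the predict stub are hypothetical placeholders for illustration, not DTPQA's actual format or any particular model's API.

# Minimal sketch of distance-bucketed VQA accuracy, the kind of analysis
# DTPQA enables. The record schema below is an assumption for illustration,
# not the benchmark's actual data format.
from collections import defaultdict

def bucket(distance_m: float) -> str:
    """Map an object distance to the ranges discussed in the paper."""
    if distance_m <= 20.0:
        return "close (<=20 m)"
    if distance_m >= 30.0:
        return "long (30+ m)"
    return "mid (20-30 m)"

def bucketed_accuracy(records, predict):
    """Per-bucket accuracy; `predict` maps a record to an answer string."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for rec in records:
        b = bucket(rec["distance_m"])
        total[b] += 1
        if predict(rec) == rec["answer"]:
            correct[b] += 1
    return {b: correct[b] / total[b] for b in total}

if __name__ == "__main__":
    # Toy items standing in for DTPQA questions (hypothetical schema).
    records = [
        {"question": "Is the pedestrian to the left or right of the ego vehicle?",
         "answer": "left", "distance_m": 12.0},
        {"question": "What color is the traffic light?",
         "answer": "red", "distance_m": 45.0},
    ]
    # Stand-in for a small VLM's answer; a real run would query the model.
    predict = lambda rec: "left"
    print(bucketed_accuracy(records, predict))

Reporting accuracy per distance bucket rather than as a single aggregate is what lets a benchmark like this expose "shortsighted" models, i.e., models whose close-range and long-range scores diverge.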

URL

https://arxiv.org/abs/2510.08352

PDF

https://arxiv.org/pdf/2510.08352.pdf

