Paper Reading AI Learner

TrafficVLM: A Controllable Visual Language Model for Traffic Video Captioning

2024-04-14 14:51:44
Quang Minh Dinh, Minh Khoi Ho, Anh Quan Dang, Hung Phong Tran

Abstract

Traffic video description and analysis have received much attention recently due to the growing demand for efficient and reliable urban surveillance systems. Most existing methods focus only on locating traffic event segments and severely lack descriptive details about the behaviour and context of the subjects of interest in those events. In this paper, we present TrafficVLM, a novel multi-modal dense video captioning model for the vehicle ego-camera view. TrafficVLM models traffic video events at different levels of analysis, both spatially and temporally, and generates long, fine-grained descriptions of the vehicle and pedestrian at different phases of the event. We also propose a conditional component that controls TrafficVLM's generation outputs, and a multi-task fine-tuning paradigm that enhances TrafficVLM's learning capability. Experiments show that TrafficVLM performs well on both the vehicle and overhead camera views. Our solution achieved outstanding results in Track 2 of the AI City Challenge 2024, ranking third in the challenge standings. Our code is publicly available at this https URL.
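To make the abstract's two mechanisms a bit more concrete, the sketch below shows one plausible reading of them: a learned control embedding that conditions caption generation on the target subject (e.g., vehicle vs. pedestrian), and a multi-task objective that sums the captioning loss over those targets. This is not the authors' implementation; the module names, feature dimensions, control vocabulary, and loss formulation are all illustrative assumptions.

```python
# Illustrative sketch only, not the TrafficVLM implementation.
# Assumptions: pre-extracted video features of size 768, a control
# embedding selecting the caption target (0 = vehicle, 1 = pedestrian),
# and a multi-task loss that sums next-token losses over targets.
import torch
import torch.nn as nn

class ConditionalCaptioner(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, feat_dim=768, num_targets=2):
        super().__init__()
        self.control = nn.Embedding(num_targets, d_model)
        self.proj = nn.Linear(feat_dim, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.embed = nn.Embedding(vocab_size, d_model)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, video_feats, target_id, caption_ids):
        # Prepend the control embedding so decoding is conditioned on the subject.
        ctrl = self.control(target_id).unsqueeze(1)            # (B, 1, d)
        memory = torch.cat([ctrl, self.proj(video_feats)], 1)  # (B, 1+T_v, d)
        mask = nn.Transformer.generate_square_subsequent_mask(caption_ids.size(1))
        hidden = self.decoder(self.embed(caption_ids), memory, tgt_mask=mask)
        return self.lm_head(hidden)                            # (B, T, vocab)

def multi_task_loss(model, video_feats, captions_per_target):
    # One caption tensor per target; teacher-forced next-token prediction,
    # with the per-target losses simply summed.
    loss_fn = nn.CrossEntropyLoss()
    total = 0.0
    for target, ids in captions_per_target.items():
        tid = torch.full((video_feats.size(0),), target, dtype=torch.long)
        logits = model(video_feats, tid, ids[:, :-1])
        total = total + loss_fn(logits.reshape(-1, logits.size(-1)),
                                ids[:, 1:].reshape(-1))
    return total

# Toy usage: two clips, 16 feature frames each, 12-token captions per target.
model = ConditionalCaptioner()
feats = torch.randn(2, 16, 768)
caps = {0: torch.randint(0, 32000, (2, 12)),   # vehicle captions
        1: torch.randint(0, 32000, (2, 12))}   # pedestrian captions
multi_task_loss(model, feats, caps).backward()
```

Summing per-target losses is the simplest multi-task formulation; the paper's actual fine-tuning paradigm may weight or schedule the tasks differently.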

URL

https://arxiv.org/abs/2404.09275

PDF

https://arxiv.org/pdf/2404.09275.pdf

