AI-Based Framework for Understanding Car Following Behaviors of Drivers in A Naturalistic Driving Environment

2023-01-23 08:24:33
Armstrong Aboah, Abdul Rashid Mussah, Yaw Adu-Gyamfi

Abstract

The most common type of accident on the road is the rear-end crash. These crashes have a significant negative impact on traffic flow and are frequently fatal. To gain a more practical understanding of these scenarios, it is necessary to accurately model the car-following behaviors that result in rear-end crashes. Numerous studies have modeled drivers' car-following behaviors; however, the majority have relied on simulated data, which may not accurately represent real-world incidents. Furthermore, most studies are restricted to modeling the ego vehicle's acceleration, which is insufficient to fully explain its behavior. The current study addresses these issues by developing an artificial intelligence framework for extracting features relevant to understanding driver behavior in a naturalistic environment. The study also modeled the acceleration of both the ego vehicle and the leading vehicle using information extracted from naturalistic driving study (NDS) videos. According to the study's findings, young drivers are more likely to drive aggressively than elderly drivers. In addition, when modeling the ego vehicle's acceleration, the relative velocity between the ego vehicle and the leading vehicle was found to be more important than the distance between the two vehicles.
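The abstract does not specify the modeling approach, but a minimal sketch can illustrate the general idea: predict ego-vehicle acceleration from features extracted from NDS video (relative velocity, gap distance, ego speed are assumed feature names here) and inspect feature importance to compare relative velocity against inter-vehicle distance. The regressor choice and the synthetic placeholder data below are assumptions for illustration, not the authors' implementation.

# Minimal sketch (not the authors' code): modeling ego-vehicle acceleration
# from assumed NDS-derived features and reading out feature importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Placeholder features standing in for quantities extracted from NDS videos.
relative_velocity = rng.normal(0.0, 3.0, n)   # m/s, lead minus ego
gap_distance = rng.uniform(5.0, 60.0, n)      # m
ego_speed = rng.uniform(5.0, 30.0, n)         # m/s

# Placeholder target so the example runs end to end; the real target would be
# the observed ego-vehicle acceleration from the naturalistic driving data.
ego_accel = (0.5 * relative_velocity
             - 0.02 * (30.0 - gap_distance)
             + rng.normal(0.0, 0.2, n))

X = np.column_stack([relative_velocity, gap_distance, ego_speed])
y = ego_accel

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
for name, imp in zip(["relative_velocity", "gap_distance", "ego_speed"],
                     model.feature_importances_):
    print(f"{name}: {imp:.3f}")

On the placeholder data the relative-velocity importance dominates by construction; applied to real NDS-derived features, the same readout would indicate whether the paper's finding holds.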

URL

https://arxiv.org/abs/2301.09315

PDF

https://arxiv.org/pdf/2301.09315.pdf
