
CAN Coach: Vehicular Control through Human Cyber-Physical Systems

2021-04-08 16:08:19
M. Nice, S. Elmadani, R. Bhadani, M. Bunting, J. Sprinkle, D. Work

Abstract

This work addresses whether a human-in-the-loop cyber-physical system (HCPS) can be effective in improving the longitudinal control of an individual vehicle in a traffic flow. We introduce the CAN Coach, a system that gives feedback to the human-in-the-loop using radar data (relative speed and position of objects ahead) available on the controller area network (CAN). Using a cohort of six human subjects driving an instrumented vehicle, we compare the driver's ability to achieve a constant time-gap control policy using only visual perception of the car ahead versus augmenting that perception with audible feedback derived from CAN sensor data. The addition of CAN-based feedback reduces the mean time-gap error by an average of 73%, and also improves the consistency of the human by reducing the standard deviation of the time-gap error by 53%. We remove human perception from the loop using a ghost mode in which the human-in-the-loop is coached to track a virtual vehicle on the road, rather than a physical one. The loss of visual perception of the vehicle ahead degrades the performance for most drivers, but by varying amounts. We show that human subjects can match the velocity of the lead vehicle with and without CAN-based feedback, but velocity matching does not offer regulation of vehicle spacing. The viability of dynamic time-gap control is also demonstrated. We conclude that (1) it is possible to coach drivers to improve performance on driving tasks using CAN data, and (2) it is a true HCPS, since removing human perception from the control loop reduces performance at the given control objective.
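As a minimal sketch of the constant time-gap objective discussed in the abstract, the snippet below assumes the conventional definition of time gap as the space gap to the lead vehicle divided by the ego vehicle's speed; the function name, variable names, and the 2.0 s desired gap are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch (assumed definitions, not the paper's implementation):
# time gap = space gap to lead vehicle / ego speed, and the time-gap error
# is the deviation from a fixed desired gap under a constant time-gap policy.

def time_gap_error(space_gap_m: float, ego_speed_mps: float,
                   desired_time_gap_s: float = 2.0) -> float:
    """Return the time-gap error (seconds) for a constant time-gap policy."""
    if ego_speed_mps <= 0.0:
        raise ValueError("time gap is undefined at zero or negative speed")
    time_gap_s = space_gap_m / ego_speed_mps   # seconds behind the lead vehicle
    return time_gap_s - desired_time_gap_s     # positive: too far back; negative: too close


# Example: 40 m behind the lead car at 20 m/s is a 2.0 s gap, i.e. zero error.
print(time_gap_error(space_gap_m=40.0, ego_speed_mps=20.0))  # -> 0.0
```

The space gap and relative speed in such a sketch would come from the radar measurements available on the CAN bus, which is the signal source the CAN Coach uses to generate audible feedback.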

URL

https://arxiv.org/abs/2104.06264

PDF

https://arxiv.org/pdf/2104.06264.pdf

