
Can we learn where people come from? Retracing of origins in merging situations

2020-12-21 17:42:14
Marion Gödel, Luca Spataro, Gerta Köster

Abstract

One crucial piece of information for a pedestrian crowd simulation is the number of agents moving from each origin to a certain target. While this setup has a large impact on the simulation, it is in most scenarios challenging to determine how many agents should be spawned at a source in the simulation. Often, the numbers are chosen based on surveys and on the experience of modelers and event organizers. These approaches are important and useful, but they reach their limits when we want to perform real-time predictions. In this case, static information about the inflow is not sufficient. Instead, we need dynamic information that can be retrieved each time a prediction is started. Nowadays, sensor data such as video footage or GPS tracks of a crowd are often available. If we can estimate from this sensor data how many pedestrians stem from each origin, we can initialize the simulation dynamically. In this study, we use density heatmaps derived from sensor data as input for a random forest regressor to predict the origin distributions. We study three different setups: a simulated dataset, experimental data, and a hybrid approach with both experimental and simulated data. In the hybrid setup, the model is trained on simulated data and then tested on experimental data. The results demonstrate that the random forest model is able to predict the origin distribution from a single density heatmap in all three configurations. This is especially promising for applying the approach to real data, where often only a limited amount of data is available.
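The pipeline described in the abstract, mapping a density heatmap to a distribution over origins with a random forest regressor, can be illustrated with a minimal sketch. This is not the authors' code: the grid size, number of origins, synthetic Dirichlet targets, and the use of scikit-learn's RandomForestRegressor are all assumptions made purely for illustration.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# predict the fraction of agents coming from each origin given one
# flattened density heatmap, using a random forest regressor.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, grid_h, grid_w, n_origins = 500, 20, 20, 3  # assumed sizes

# Stand-in data: each sample is one density heatmap (grid_h x grid_w);
# the target is the fraction of agents spawned at each of the origins.
heatmaps = rng.random((n_samples, grid_h, grid_w))
origin_fractions = rng.dirichlet(np.ones(n_origins), size=n_samples)

# Flatten each heatmap into a feature vector for the regressor.
X = heatmaps.reshape(n_samples, -1)
y = origin_fractions

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Predict the origin distribution from a single unseen heatmap and
# renormalize so the predicted fractions sum to one.
pred = model.predict(X_test[:1])
pred = pred / pred.sum(axis=1, keepdims=True)
print("predicted origin distribution:", pred[0])
```

In a real application, the synthetic heatmaps and targets above would be replaced by heatmaps computed from sensor data (or from the simulated and experimental datasets mentioned in the abstract) together with the known origin distributions used to generate them.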

URL

https://arxiv.org/abs/2012.11527

PDF

https://arxiv.org/pdf/2012.11527.pdf
