Paper Reading AI Learner

Crowd-sensing Simultaneous Localization and Radio Fingerprint Mapping based on Probabilistic Similarity Models

2019-04-26 08:29:25
Ran Liu, Sumudu Hasala Marakkalage, Madhushanka Padmal, Thiruketheeswaran Shaganan, Chau Yuen, Yong Liang Guan, U-Xuan Tan

Abstract

Simultaneous localization and mapping (SLAM) has been extensively researched in recent years, particularly with range-based or vision-based sensors. Rather than deploying dedicated devices that rely on visual features, it is more pragmatic to exploit radio features for this task, owing to their ubiquity and the wide deployment of Wi-Fi wireless networks. In this paper, we present a novel approach for crowd-sensing simultaneous localization and radio fingerprint mapping (C-SLAM-RF) in large unknown indoor environments. The proposed system makes use of the received signal strength (RSS) from surrounding Wi-Fi access points (APs) and the motion tracking data from a smartphone (Tango as an example). These measurements are captured while multiple users walk through unknown environments, without map information or knowledge of the AP locations. The experiments were conducted in a university building with a dynamic environment, and the results show that the proposed system estimates the tracks of a group of users with an accuracy of 1.74 meters when compared to ground truth acquired from a point cloud-based SLAM.

Abstract (translated)

In recent years, simultaneous localization and mapping (SLAM) has been extensively researched, particularly with range-based or vision-based sensors. Owing to the ubiquity of radio features and the wide deployment of Wi-Fi wireless networks, exploiting radio features for this task is more practical than deploying dedicated devices that use visual features. This paper presents a novel approach for crowd-sensing simultaneous localization and radio fingerprint mapping (C-SLAM-RF) in large unknown indoor environments. The system uses the received signal strength (RSS) from surrounding wireless access points (APs) and motion tracking data from a smartphone (Tango as an example). These measurements are captured while multiple users walk through unknown environments without map information or AP locations. Experiments were conducted in a university building with a dynamic environment; the results show that, compared to ground truth obtained from a point cloud-based SLAM, the system estimates the tracks of a group of users with an accuracy of 1.74 meters.

URL

https://arxiv.org/abs/1904.11712

PDF

https://arxiv.org/pdf/1904.11712.pdf
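
Notes

The "probabilistic similarity models" in the title refer to comparing Wi-Fi RSS fingerprints so that two scans taken at nearby locations can be recognized as a revisit (a loop closure in the SLAM sense). The paper defines its own model; as a minimal illustrative sketch only, one plausible measure is a Gaussian kernel over the RSS differences of APs seen in both scans. All names and the `sigma` parameter below are assumptions for illustration, not the paper's implementation.

```python
import math

def rss_similarity(fp_a, fp_b, sigma=6.0):
    """Similarity in [0, 1] between two RSS fingerprints.

    Each fingerprint maps an AP identifier (e.g. a BSSID string) to an
    RSS reading in dBm. APs observed in only one of the two scans are
    simply ignored here; a full model would also account for them.
    """
    shared = set(fp_a) & set(fp_b)
    if not shared:
        return 0.0
    # Gaussian kernel on the per-AP RSS difference, averaged over the
    # APs common to both scans; identical readings give similarity 1.0.
    score = sum(
        math.exp(-((fp_a[ap] - fp_b[ap]) ** 2) / (2 * sigma ** 2))
        for ap in shared
    )
    return score / len(shared)

# Two scans a few seconds apart that share two APs.
scan_t1 = {"ap1": -45, "ap2": -60, "ap3": -72}
scan_t2 = {"ap1": -47, "ap2": -58, "ap4": -80}
print(rss_similarity(scan_t1, scan_t2))
```

Thresholding such a similarity yields candidate loop closures, which a pose-graph back end can then combine with the smartphone's motion-tracking odometry.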
