Paper Reading AI Learner

Towards Explainable Indoor Localization: Interpreting Neural Network Learning on Wi-Fi Fingerprints Using Logic Gates

2025-06-18 15:34:41
Danish Gufran, Sudeep Pasricha

Abstract

Indoor localization using deep learning (DL) has demonstrated strong accuracy in mapping Wi-Fi RSS fingerprints to physical locations; however, most existing DL frameworks function as black-box models, offering limited insight into how predictions are made or how models respond to real-world noise over time. This lack of interpretability hampers our ability to understand the impact of temporal variations caused by environmental dynamics, and to adapt models for long-term reliability. To address this, we introduce LogNet, a novel logic gate-based framework designed to interpret and enhance DL-based indoor localization. LogNet enables transparent reasoning by identifying which access points (APs) are most influential for each reference point (RP) and reveals how environmental noise disrupts DL-driven localization decisions. This interpretability allows us to trace and diagnose model failures and to adapt DL systems for more stable long-term deployments. Evaluations across multiple real-world building floorplans and over two years of temporal variation show that LogNet not only interprets the internal behavior of DL models but also improves performance, achieving 1.1x to 2.8x lower localization error, 3.4x to 43.3x smaller model size, and 1.5x to 3.6x lower latency compared to prior DL-based models.
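
LogNet itself is not reproduced here, but the kind of question the abstract raises ("which APs are most influential for each RP?") can be loosely illustrated with a permutation-importance sketch on synthetic RSS fingerprints. Everything below is hypothetical and is not the authors' method: the data, the stand-in nearest-centroid localizer, and the importance scores only demonstrate the analysis framing, assuming fingerprints are RSS vectors indexed by AP and labeled by RP.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 4 reference points (RPs), 6 access points (APs),
# 50 RSS fingerprints per RP. AP 2 is deliberately made uninformative.
n_rps, n_aps, n_samples = 4, 6, 50
means = rng.uniform(-90.0, -40.0, size=(n_rps, n_aps))
means[:, 2] = -70.0  # AP 2 reads ~-70 dBm at every RP: no location signal

X = np.vstack([m + rng.normal(0.0, 3.0, size=(n_samples, n_aps)) for m in means])
y = np.repeat(np.arange(n_rps), n_samples)

# Stand-in localizer: predict the RP whose mean fingerprint is closest.
centroids = np.stack([X[y == r].mean(axis=0) for r in range(n_rps)])

def predict(samples):
    dists = np.linalg.norm(samples[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def per_rp_accuracy(samples):
    pred = predict(samples)
    return np.array([(pred[y == r] == r).mean() for r in range(n_rps)])

base_acc = per_rp_accuracy(X)

# Permutation importance: shuffle one AP's readings and record, per RP,
# how much localization accuracy drops. A large drop marks an influential AP.
importance = np.empty((n_aps, n_rps))
for ap in range(n_aps):
    Xp = X.copy()
    Xp[:, ap] = rng.permutation(Xp[:, ap])
    importance[ap] = base_acc - per_rp_accuracy(Xp)

print("baseline per-RP accuracy:", np.round(base_acc, 3))
print("AP-by-RP importance:\n", np.round(importance, 3))
```

Shuffling the uninformative AP leaves accuracy essentially unchanged, while shuffling an informative AP can degrade some RPs more than others; per the abstract, LogNet's logic-gate formulation exposes such per-RP AP influence directly rather than through post-hoc probing like this.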

URL

https://arxiv.org/abs/2506.15559

PDF

https://arxiv.org/pdf/2506.15559.pdf

