Paper Reading AI Learner

A Rapid Adapting and Continual Learning Spiking Neural Network Path Planning Algorithm for Mobile Robots

2024-04-23 21:13:17
Harrison Espino, Robert Bain, Jeffrey L. Krichmar

Abstract

Mapping traversal costs in an environment and planning paths based on this map are important for autonomous navigation. We present a neurobotic navigation system that utilizes a Spiking Neural Network Wavefront Planner and E-prop learning to concurrently map and plan paths in a large and complex environment. We incorporate a novel method for mapping which, when combined with the Spiking Wavefront Planner, allows for adaptive planning by selectively considering any combination of costs. The system is tested on a mobile robot platform in an outdoor environment with obstacles and varying terrain. Results indicate that the system is capable of discerning features in the environment using three measures of cost: (1) energy expenditure by the wheels, (2) time spent in the presence of obstacles, and (3) terrain slope. In just twelve hours of online training, E-prop learns and incorporates traversal costs into the path planning maps by updating the delays in the Spiking Wavefront Planner. On simulated paths, the Spiking Wavefront Planner plans significantly shorter and lower-cost paths than A* and RRT*. The Spiking Wavefront Planner is compatible with neuromorphic hardware and could be used in applications requiring low size, weight, and power.
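To make the wavefront idea concrete, here is a minimal, hypothetical sketch (not the authors' implementation) of cost-aware wavefront planning on a grid: each cell acts as a neuron, a spike at one cell excites its neighbours after a delay proportional to that neighbour's traversal cost, and the path is read out by descending the first-spike-time gradient from the goal. The cost grid, delay rule (`1 + cost`), and function name are illustrative assumptions; in the paper, the delays themselves are what E-prop learns from experience.

```python
import heapq

def spiking_wavefront_plan(costs, start, goal):
    """Illustrative wavefront planner on a 2D grid of traversal costs.

    A neuron that spikes at time t excites its 4-neighbours, which spike
    after an axonal delay of 1 + cost (an assumed delay rule). Tracing
    first-spike times back from the goal recovers a lowest-delay path.
    """
    rows, cols = len(costs), len(costs[0])
    spike_time = {start: 0}            # first-spike time per neuron
    events = [(0, start)]              # event queue of (time, cell)
    while events:
        t, (r, c) = heapq.heappop(events)
        if t > spike_time.get((r, c), float("inf")):
            continue                   # stale event: neuron spiked earlier
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nt = t + 1 + costs[nr][nc]   # delay encodes traversal cost
                if nt < spike_time.get((nr, nc), float("inf")):
                    spike_time[(nr, nc)] = nt
                    heapq.heappush(events, (nt, (nr, nc)))
    # Read out the path by following decreasing spike times from the goal.
    path, cell = [goal], goal
    while cell != start:
        r, c = cell
        cell = min(
            ((nr, nc) for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
             if (nr, nc) in spike_time),
            key=lambda n: spike_time[n],
        )
        path.append(cell)
    return path[::-1]
```

For example, with a column of high-cost cells down the middle of a 3x3 grid, the planned path detours around them rather than crossing, mirroring how learned delays steer the wavefront away from expensive terrain.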


URL

https://arxiv.org/abs/2404.15524

PDF

https://arxiv.org/pdf/2404.15524.pdf

