
Sparse-Graph-Enabled Formation Planning for Large-Scale Aerial Swarms

2024-03-26 00:35:06
Yuan Zhou, Lun Quan, Chao Xu, Guangtong Xu, Fei Gao

Abstract

Formation trajectory planning that uses complete graphs to model collaborative constraints becomes computationally intractable as the number of drones increases, due to the curse of dimensionality. To tackle this issue, this paper presents a sparse graph construction method for formation planning that achieves a better efficiency-performance trade-off. First, a sparsification mechanism for complete graphs is designed to ensure the global rigidity of the sparsified graphs, a necessary condition for a graph to correspond uniquely to a geometric shape. Second, a sparse graph is constructed that sufficiently preserves the main structural features of the complete graph. Since the graph-based formation constraint is described by the Laplacian matrix, the sparse graph construction problem is equivalent to submatrix selection, which has combinatorial time complexity and requires a scoring metric. Comparative simulations show that the Max-Trace matrix-revealing metric delivers promising performance. The sparse graph is then integrated into formation planning. Simulation results with 72 drones in complex environments demonstrate that when 30% of the connection edges are preserved, our method achieves formation error and recovery performance comparable to complete graphs, while planning efficiency is improved by approximately an order of magnitude. Benchmark comparisons and ablation studies are conducted to fully validate the merits of our method.
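
The abstract frames sparsification as submatrix selection over the graph Laplacian, scored by a Max-Trace matrix-revealing metric. The sketch below is not the paper's algorithm; it only illustrates the general workflow under stated assumptions: greedily retain a fraction of the complete formation graph's edges using a trace-based score, then check connectivity as a necessary (but not sufficient) proxy for global rigidity. The distance-based edge weights, the 30% retention ratio, and all function names are illustrative.

```python
# Minimal sketch of graph sparsification for formation planning.
# Assumptions (not from the paper): distance-based edge weights, a trace-of-
# Laplacian score as a simplified stand-in for the Max-Trace metric, and a
# connectivity check instead of a full global-rigidity test.
import numpy as np
from itertools import combinations


def weighted_laplacian(n, edges, weights):
    """Assemble the weighted graph Laplacian L = D - W for the given edges."""
    L = np.zeros((n, n))
    for (i, j), w in zip(edges, weights):
        L[i, i] += w
        L[j, j] += w
        L[i, j] -= w
        L[j, i] -= w
    return L


def is_connected(n, edges):
    """Connectivity test via the Fiedler value (second-smallest Laplacian eigenvalue)."""
    lam = np.sort(np.linalg.eigvalsh(weighted_laplacian(n, edges, [1.0] * len(edges))))
    return lam[1] > 1e-9


def sparsify_formation_graph(positions, keep_ratio=0.3):
    """Greedily keep a fraction of the edges of the complete graph over drone positions."""
    n = len(positions)
    all_edges = list(combinations(range(n), 2))
    # Illustrative weighting: nearby drones get stronger edges.
    w = {e: 1.0 / (np.linalg.norm(positions[e[0]] - positions[e[1]]) + 1e-6)
         for e in all_edges}
    budget = max(n - 1, int(keep_ratio * len(all_edges)))

    kept, remaining = [], set(all_edges)
    while len(kept) < budget and remaining:
        # Trace of the kept-edge Laplacian as a simplified score; since
        # trace(L) = 2 * (sum of edge weights), this collapses to keeping
        # the heaviest edges first.
        best = max(remaining, key=lambda e: np.trace(
            weighted_laplacian(n, kept + [e],
                               [w[x] for x in kept] + [w[e]])))
        kept.append(best)
        remaining.remove(best)

    # Connectivity is necessary (but not sufficient) for global rigidity;
    # the paper enforces rigidity directly, here we only report the check.
    if not is_connected(n, kept):
        print("warning: sparsified graph is disconnected; increase keep_ratio")
    return kept


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    drones = rng.uniform(0.0, 10.0, size=(12, 3))   # 12 drones in 3-D
    edges = sparsify_formation_graph(drones)
    print(f"kept {len(edges)} of {12 * 11 // 2} edges")
```

In the paper the retained edges would then define the Laplacian used in the formation constraint during trajectory optimization; the sketch stops at edge selection.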

Abstract (translated)

Trajectory planning that models collaborative constraints with complete graphs becomes computationally intractable because of the curse of dimensionality. To address this issue, this paper proposes a sparse graph construction method to achieve a better efficiency-performance trade-off. First, a sparsification mechanism is designed to ensure the global rigidity of the sparsified graphs, a necessary condition for a sparsified graph to correspond uniquely to a geometric shape. Second, a good sparse graph is constructed to preserve the main structural features of the complete graph. Since the graph-based formation constraint is represented by the Laplacian matrix, the sparse graph construction problem is equivalent to submatrix selection, which has combinatorial time complexity and requires a scoring metric. Comparative simulations show that the Max-Trace matrix-revealing metric delivers promising performance. The sparse graph is integrated into trajectory planning. Simulation results with 72 drones in complex environments show that when 30% of the connection edges are preserved, our method achieves formation error and recovery performance comparable to complete graphs, while planning efficiency is improved by approximately an order of magnitude. Benchmark comparisons and ablation studies fully validate the merits of our method.

URL

https://arxiv.org/abs/2403.17288

PDF

https://arxiv.org/pdf/2403.17288.pdf

