Mamba3D: Enhancing Local Features for 3D Point Cloud Analysis via State Space Model

2024-04-23 12:20:27
Xu Han, Yuan Tang, Zhaoxuan Wang, Xianzhi Li

Abstract

Existing Transformer-based models for point cloud analysis suffer from quadratic complexity, leading to compromised point cloud resolution and information loss. In contrast, the newly proposed Mamba model, based on state space models (SSM), outperforms the Transformer in multiple areas with only linear complexity. However, the straightforward adoption of Mamba does not achieve satisfactory performance on point cloud tasks. In this work, we present Mamba3D, a state space model tailored for point cloud learning to enhance local feature extraction, achieving superior performance, high efficiency, and scalability potential. Specifically, we propose a simple yet effective Local Norm Pooling (LNP) block to extract local geometric features. Additionally, to obtain better global features, we introduce a bidirectional SSM (bi-SSM) with both a token forward SSM and a novel backward SSM that operates on the feature channel. Extensive experimental results show that Mamba3D surpasses Transformer-based counterparts and concurrent works in multiple tasks, with or without pre-training. Notably, Mamba3D achieves multiple state-of-the-art results, including an overall accuracy of 92.6% on ScanObjectNN (trained from scratch) and 95.1% on the ModelNet40 classification task (with single-modal pre-training), with only linear complexity.

Abstract (translated)

Existing Transformer-based point cloud analysis models suffer from quadratic complexity, leading to reduced point cloud resolution and information loss. In contrast, the newly proposed Mamba model, based on state space models, outperforms the Transformer in multiple areas with only linear complexity. However, directly adopting Mamba does not achieve satisfactory performance on point cloud tasks. In this work, we propose Mamba3D, a state space model designed specifically for point cloud learning, to enhance local feature extraction and achieve superior performance, high efficiency, and scalability. Specifically, we propose a simple yet effective Local Norm Pooling (LNP) block to extract local geometric features. In addition, to obtain better global features, we introduce a bidirectional state space model (bi-SSM) consisting of a token forward SSM and a novel backward SSM that operates on the feature channel. Extensive experimental results show that Mamba3D outperforms Transformer-based models as well as concurrent works on multiple tasks, with or without pre-training. Notably, Mamba3D achieves multiple state-of-the-art results, including an overall accuracy of 92.6% on ScanObjectNN (trained from scratch) and 95.1% on the ModelNet40 classification task (with single-modal pre-training), with only linear complexity.
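To make the two components described in the abstract concrete, below is a minimal PyTorch sketch of (a) a linear-time state space scan plus a bidirectional variant that runs one scan forward over tokens and a second, backward scan over the feature channels, and (b) a k-NN grouping with in-group normalization and max-pooling in the spirit of Local Norm Pooling. All class names (SimpleSSM, BiSSMBlock, LNP), the per-channel diagonal recurrence, the k-NN grouping, and the additive fusion are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch only: the recurrence, grouping, and fusion below are
# assumptions for exposition, not Mamba3D's actual implementation.
import torch
import torch.nn as nn


class SimpleSSM(nn.Module):
    """Toy per-channel linear recurrence: h_t = a*h_{t-1} + b*x_t, y_t = c*h_t.

    Each token is visited once, so the cost is O(L) in sequence length,
    unlike the O(L^2) pairwise interactions of self-attention.
    """

    def __init__(self, dim):
        super().__init__()
        self.log_a = nn.Parameter(torch.zeros(dim))  # decay, squashed to (0, 1)
        self.b = nn.Parameter(torch.ones(dim))       # input gate
        self.c = nn.Parameter(torch.ones(dim))       # output projection

    def forward(self, x):                 # x: (batch, length, dim)
        a = torch.sigmoid(self.log_a)
        h = torch.zeros_like(x[:, 0])
        ys = []
        for t in range(x.shape[1]):       # linear-time scan
            h = a * h + self.b * x[:, t]
            ys.append(self.c * h)
        return torch.stack(ys, dim=1)


class BiSSMBlock(nn.Module):
    """Bidirectional SSM: a forward scan over tokens plus a backward scan
    over the feature-channel axis, merged by addition (the fusion is an
    assumption; the abstract only states that the backward SSM operates
    on the feature channel)."""

    def __init__(self, dim, n_tokens):
        super().__init__()
        self.token_ssm = SimpleSSM(dim)         # scans the L token positions
        self.channel_ssm = SimpleSSM(n_tokens)  # scans the D feature channels

    def forward(self, x):                       # x: (B, L, D)
        fwd = self.token_ssm(x)
        # Transpose so channels become the scan axis, flip for the backward
        # direction, then undo both to restore the (B, L, D) layout.
        bwd = self.channel_ssm(x.transpose(1, 2).flip(1)).flip(1).transpose(1, 2)
        return fwd + bwd


class LNP(nn.Module):
    """Toy local feature extractor in the spirit of Local Norm Pooling:
    group each point's k nearest neighbors, normalize the relative
    features inside each group, then max-pool back to one vector."""

    def __init__(self, dim, k=16):
        super().__init__()
        self.k = k
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Linear(dim, dim)

    def forward(self, xyz, feat):               # xyz: (B, N, 3), feat: (B, N, D)
        idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices  # (B, N, k)
        grouped = torch.gather(
            feat.unsqueeze(1).expand(-1, feat.shape[1], -1, -1), 2,
            idx.unsqueeze(-1).expand(-1, -1, -1, feat.shape[-1]),
        )                                       # (B, N, k, D)
        local = self.norm(grouped - feat.unsqueeze(2))   # relative, normalized
        return feat + self.mlp(local.max(dim=2).values)  # pool and fuse


if __name__ == "__main__":
    xyz = torch.randn(2, 128, 3)      # 2 clouds, 128 sampled center points
    feat = torch.randn(2, 128, 64)    # 64-dim feature per center
    out = BiSSMBlock(dim=64, n_tokens=128)(LNP(dim=64)(xyz, feat))
    print(out.shape)                  # torch.Size([2, 128, 64])
```

In the real model the scan would be a hardware-aware selective scan as in Mamba rather than a Python loop; the loop here just makes the linear-time nature of the recurrence explicit.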

URL

https://arxiv.org/abs/2404.14966

PDF

https://arxiv.org/pdf/2404.14966.pdf

