Paper Reading AI Learner

MASK4D: Mask Transformer for 4D Panoptic Segmentation

2023-09-28 03:30:50
Kadir Yilmaz, Jonas Schult, Alexey Nekrasov, Bastian Leibe

Abstract

Accurately perceiving and tracking instances over time is essential for the decision-making processes of autonomous agents interacting safely in dynamic environments. With this intention, we propose Mask4D for the challenging task of 4D panoptic segmentation of LiDAR point clouds. Mask4D is the first transformer-based approach unifying semantic instance segmentation and tracking of sparse and irregular sequences of 3D point clouds into a single joint model. Our model directly predicts semantic instances and their temporal associations without relying on hand-crafted, non-learned association strategies such as probabilistic clustering or voting-based center prediction. Instead, Mask4D introduces spatio-temporal instance queries that encode the semantic and geometric properties of each semantic tracklet in the sequence. In an in-depth study, we find that it is critical to promote spatially compact instance predictions, as spatio-temporal instance queries tend to merge multiple semantically similar instances even when they are spatially distant. To this end, we regress 6-DOF bounding box parameters from the spatio-temporal instance queries, which serves as an auxiliary task fostering spatially compact predictions. Mask4D achieves a new state of the art on the SemanticKITTI test set with a score of 68.4 LSTQ, improving upon published top-performing methods by at least +4.5%.

URL

https://arxiv.org/abs/2309.16133

PDF

https://arxiv.org/pdf/2309.16133.pdf

