Abstract
Existing state-of-the-art 3D point cloud understanding methods perform well only in a fully supervised manner. To the best of our knowledge, no unified framework simultaneously solves the downstream high-level understanding tasks of both segmentation and detection, especially when labels are extremely limited. This work presents a general and simple framework for point cloud understanding when labels are limited. Our first contribution is an extensive methodological comparison of traditional and learned 3D descriptors for weakly supervised 3D scene understanding, which validates that our adapted traditional PFH-based 3D descriptors generalize well across different domains. Our second contribution is a learning-based region merging strategy driven by the affinity provided by both the traditional/learned 3D descriptors and the learned semantics; the merging process takes both low-level geometric and high-level semantic feature correlations into account. Experimental results demonstrate that our framework achieves the best performance on the three most important weakly supervised point cloud understanding tasks (semantic segmentation, instance segmentation, and object detection), even when only a very limited number of points is labeled. Our method, termed Region Merging 3D (RM3D), delivers superior performance on the ScanNet data-efficient learning online benchmarks and four other large-scale 3D understanding benchmarks under various experimental settings, outperforming current state-of-the-art methods by a clear margin on various 3D understanding tasks without complicated learning strategies such as active learning.
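The affinity-driven region merging described above can be illustrated with a minimal sketch. The code below is an assumption-laden toy version, not the paper's exact formulation: it combines a geometric-descriptor similarity and a semantic-feature similarity (both cosine, with a hypothetical mixing weight `alpha`) into a pairwise affinity, then greedily merges regions whose affinity exceeds a threshold using union-find.

```python
import numpy as np

def region_affinity(geo_a, geo_b, sem_a, sem_b, alpha=0.5):
    """Affinity between two regions as a weighted sum of low-level
    geometric descriptor similarity and high-level semantic similarity.
    The cosine measure and the weight `alpha` are illustrative
    assumptions, not the paper's exact definition."""
    def cos(u, v):
        return float(np.dot(u, v) /
                     (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))
    return alpha * cos(geo_a, geo_b) + (1.0 - alpha) * cos(sem_a, sem_b)

def merge_regions(geo_feats, sem_feats, threshold=0.9, alpha=0.5):
    """Greedy affinity-based region merging via union-find.
    Any two regions whose pairwise affinity exceeds `threshold`
    end up in the same merged group. Returns one group id per region."""
    n = len(geo_feats)
    parent = list(range(n))

    def find(i):
        # Path-halving union-find lookup.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if region_affinity(geo_feats[i], geo_feats[j],
                               sem_feats[i], sem_feats[j], alpha) > threshold:
                parent[find(j)] = find(i)  # union the two groups

    return [find(i) for i in range(n)]
```

In this sketch, two regions with nearly identical descriptors and semantics merge into one group, while a geometrically and semantically distinct region stays separate; the actual method learns the merging decision rather than thresholding a fixed score.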
URL
https://arxiv.org/abs/2312.01262