Abstract
We present a mapping system capable of constructing detailed instance-level semantic models of room-sized indoor environments using an RGB-D camera. In this work, we integrate deep-learning-based instance segmentation and classification into a state-of-the-art RGB-D SLAM system. We leverage the pipeline of ElasticFusion \cite{whelan2016elasticfusion} as a backbone, and propose modifications to the registration cost function to make full use of the instance class labels in the process. The proposed objective function features tunable weights for the depth, appearance, and semantic information channels, which can be learned from data. The resulting system is capable of producing accurate semantic maps of room-sized environments, as well as reconstructing highly detailed object-level models. The developed method has been verified through experimental validation on the TUM RGB-D SLAM benchmark and the YCB video dataset. Our results confirm that the proposed system performs favorably in terms of trajectory estimation, surface reconstruction, and segmentation quality in comparison to other state-of-the-art systems.
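The abstract describes a registration objective that combines depth, appearance, and semantic residuals under tunable per-channel weights. As a rough illustration of that idea only, the sketch below combines three residual channels into one scalar cost; the function name, residual definitions, and default weights are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def registration_cost(e_depth, e_appearance, e_semantic,
                      w_depth=1.0, w_appearance=0.1, w_semantic=0.1):
    """Combine per-channel registration residuals into one scalar objective.

    Each ``e_*`` is an array of per-pixel residuals for one channel
    (depth, appearance, semantic label agreement). The ``w_*`` weights
    are the tunable per-channel coefficients which, per the abstract,
    can be learned from data. All details here are illustrative.
    """
    return (w_depth * np.sum(e_depth ** 2)
            + w_appearance * np.sum(e_appearance ** 2)
            + w_semantic * np.sum(e_semantic ** 2))

# Example: toy residual vectors for the three channels.
cost = registration_cost(np.array([0.1, 0.2]),
                         np.array([0.3]),
                         np.array([0.0, 0.5]))
```

In a real pose-estimation loop this scalar would be minimized over the camera pose; here it only shows how the weighted sum of channels is formed.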
URL
https://arxiv.org/abs/1903.10782