Abstract
We contribute a dense SLAM system that takes a live stream of depth images as input and reconstructs non-rigidly deforming scenes in real time, without templates or prior models. In contrast to existing approaches, we do not maintain any volumetric data structures, such as truncated signed distance function (TSDF) fields or deformation fields, which are performance- and memory-intensive. Our system works with a flat, point-based (surfel) representation of geometry, which can be acquired directly from commodity depth sensors. Standard graphics pipelines and general-purpose GPU (GPGPU) computing are leveraged for all central operations: nearest neighbor maintenance, non-rigid deformation field estimation, and fusion of depth measurements. Our pipeline inherently avoids expensive volumetric operations such as marching cubes, volumetric fusion, and dense deformation field updates, leading to significantly improved performance. Furthermore, the explicit and flexible surfel-based geometry representation enables efficient handling of topology changes and tracking failures, keeping our reconstructions consistent with updated depth observations. Our system allows robots to maintain a description of scenes containing non-rigidly deforming objects, potentially enabling interaction with dynamic working environments.
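The "fusion of depth measurements" the abstract mentions is, in surfel-based pipelines, commonly realized as a confidence-weighted running average of each surfel's position and normal. The sketch below illustrates that generic update; the function name, parameters, and weighting scheme are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def fuse_surfel(position, normal, confidence, meas_pos, meas_normal, meas_conf=1.0):
    """Confidence-weighted running average, as commonly used in
    surfel-based fusion (illustrative sketch, not the paper's exact rule)."""
    total = confidence + meas_conf
    # blend positions by accumulated confidence
    new_pos = (confidence * position + meas_conf * meas_pos) / total
    # blend normals the same way, then re-normalize to unit length
    n = confidence * normal + meas_conf * meas_normal
    new_normal = n / np.linalg.norm(n)
    return new_pos, new_normal, total

# usage: an existing surfel at the origin absorbs a new measurement
pos, nrm, conf = fuse_surfel(
    np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]), 1.0,
    np.array([0.0, 0.0, 2.0]), np.array([0.0, 0.0, 1.0]), 1.0)
```

Because each surfel is updated independently, this step parallelizes trivially on the GPU, consistent with the GPGPU design the abstract describes.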
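The non-rigid deformation field estimation named in the abstract is typically parameterized by a sparse graph of nodes, each carrying a local rigid transform, with points warped by a weighted blend of nearby node transforms (in the style of embedded deformation). The sketch below shows only that blending step under those assumptions; the names and the linear-blend formulation are illustrative, not the paper's exact method.

```python
import numpy as np

def warp_point(p, nodes, rotations, translations, weights):
    """Warp a point by blending per-node rigid transforms of a
    deformation graph (embedded-deformation-style sketch).

    weights: list of (node_index, weight) pairs, weights summing to 1.
    """
    out = np.zeros(3)
    for k, w in weights:
        # rotate about the node center, then apply its translation
        out += w * (rotations[k] @ (p - nodes[k]) + nodes[k] + translations[k])
    return out

# usage: two identity-rotation nodes, both translating by +1 in x
p = np.array([1.0, 2.0, 3.0])
nodes = [np.zeros(3), np.ones(3)]
R = [np.eye(3), np.eye(3)]
t = [np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])]
warped = warp_point(p, nodes, R, t, [(0, 0.5), (1, 0.5)])
```

Estimation then reduces to solving for the per-node transforms that best align the warped model with the incoming depth frame, which avoids maintaining a dense volumetric deformation field.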
URL
https://arxiv.org/abs/1904.13073