Abstract
Storing and transmitting LiDAR point cloud data is essential for many AV applications, such as training data collection, remote control, cloud services, or SLAM. However, due to the sparsity and unordered structure of the data, it is difficult to compress point cloud data to a small volume. Transforming the raw point cloud data into a dense 2D matrix structure is a promising way to apply compression algorithms. We propose a new lossless and calibrated 3D-to-2D transformation that allows compression algorithms to efficiently exploit spatial correlations within the 2D representation. To compress the structured representation, we use common image compression methods as well as a self-supervised deep compression approach based on a recurrent neural network. We also rearrange the LiDAR's intensity measurements into a dense 2D representation and propose a new metric to evaluate the compression performance on the intensity channel. Compared to approaches based on generic octree point cloud compression or on raw point cloud data compression, our approach achieves the best quantitative and visual performance. Source code and dataset are available at this https URL.
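The abstract's core idea is projecting an unordered LiDAR point cloud onto a dense 2D matrix so that image-style compressors can exploit spatial correlations. The paper's transformation is lossless and sensor-calibrated; the exact mapping is not given in the abstract, so the sketch below shows only the standard spherical (range-image) projection that such approaches typically build on. The resolution and field-of-view parameters (`h`, `w`, `fov_up`, `fov_down`) are illustrative assumptions, not values from the paper, and this naive binning is lossy where multiple points fall into one cell:

```python
import numpy as np

def points_to_range_image(points, h=64, w=1024, fov_up=3.0, fov_down=-25.0):
    """Project an (N, 3) array of x/y/z points onto a dense h x w range image.

    A generic spherical projection, NOT the paper's calibrated lossless
    transform; h/w/fov values are assumed for illustration.
    """
    fov_up_r = np.radians(fov_up)
    fov_down_r = np.radians(fov_down)
    fov = fov_up_r - fov_down_r

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)          # range per point
    yaw = np.arctan2(y, x)                      # azimuth angle
    pitch = np.arcsin(z / r)                    # elevation angle

    # Map azimuth to columns and elevation to rows of the 2D grid.
    u = 0.5 * (1.0 - yaw / np.pi) * w
    v = (1.0 - (pitch - fov_down_r) / fov) * h
    u = np.clip(np.floor(u), 0, w - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, h - 1).astype(np.int32)

    img = np.zeros((h, w), dtype=np.float32)
    img[v, u] = r                               # later points overwrite earlier ones
    return img
```

The resulting dense matrix can then be fed to standard image codecs or a learned compressor; the intensity channel can be rearranged into a second matrix of the same shape in the same way.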
URL
https://arxiv.org/abs/2402.11680