Abstract
Vertical beam dropout in spinning LiDAR sensors, triggered by hardware aging, dust, snow, fog, or bright reflections, removes entire vertical slices from the point cloud and severely degrades 3D perception in autonomous vehicles. This paper proposes a Graph Attention Network (GAT)-based framework that reconstructs these missing vertical channels using only the current LiDAR frame, requiring no camera images or temporal information. Each LiDAR sweep is represented as an unstructured spatial graph: points are nodes, and edges connect nearby points while preserving the original beam-index ordering. A multi-layer GAT learns adaptive attention weights over local geometric neighborhoods and directly regresses the missing elevation (z) values at dropout locations. Trained and evaluated on 1,065 raw KITTI sequences with simulated channel dropout, the method achieves an average height RMSE of 11.67 cm, with 87.98% of reconstructed points falling within a 10 cm error threshold. Inference takes 14.65 seconds per frame on a single GPU, and reconstruction quality remains stable across different neighborhood sizes k. These results show that a pure graph attention model operating solely on raw point-cloud geometry can effectively recover dropped vertical beams under realistic sensor degradation.
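The core idea the abstract describes — building a kNN spatial graph over the sweep and regressing missing z values from attention-weighted neighborhoods — can be sketched minimally. The snippet below is an illustrative simplification in NumPy, not the paper's method: it substitutes fixed distance-based attention for the learned multi-layer GAT attention, and the function name `reconstruct_z` and all parameters are hypothetical.

```python
import numpy as np

def reconstruct_z(obs_xyz, query_xy, k=8, tau=0.5):
    """Estimate missing elevation (z) at query (x, y) locations from
    observed points, via a softmax over negative neighbor distances
    (a fixed stand-in for learned GAT attention weights)."""
    # Pairwise distances from each query location to every observed point.
    d = np.linalg.norm(query_xy[:, None, :] - obs_xyz[None, :, :2], axis=-1)
    # k nearest observed neighbors per query point (the graph edges).
    nbr = np.argsort(d, axis=1)[:, :k]
    dk = np.take_along_axis(d, nbr, axis=1)
    # Attention weights: closer neighbors contribute more.
    w = np.exp(-dk / tau)
    w /= w.sum(axis=1, keepdims=True)
    # Attention-weighted average of neighbor elevations.
    return (w * obs_xyz[nbr, 2]).sum(axis=1)

# Toy demo: a flat plane at z = 1.5 with two dropped locations.
xs, ys = np.meshgrid(np.linspace(0, 1, 10), np.linspace(0, 1, 10))
obs = np.stack([xs.ravel(), ys.ravel(), np.full(100, 1.5)], axis=1)
queries = np.array([[0.45, 0.45], [0.2, 0.7]])
print(reconstruct_z(obs, queries))
```

On this flat plane the weighted average recovers z exactly; the paper's contribution is learning the attention weights (and stacking GAT layers) so that reconstruction also works on curved, cluttered road-scene geometry.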
URL
https://arxiv.org/abs/2512.12410