Abstract
We present GDFusion, a temporal fusion method for vision-based 3D semantic occupancy prediction (VisionOcc). GDFusion explores the underexamined role of temporal fusion within the VisionOcc framework, addressing both temporal cues and fusion strategies. It systematically examines the entire VisionOcc pipeline and identifies three fundamental yet previously overlooked temporal cues: scene-level consistency, motion calibration, and geometric complementation. These cues capture distinct facets of temporal evolution and contribute differently to the various modules of the VisionOcc pipeline. To fuse temporal signals effectively across heterogeneous representations, we propose a novel fusion strategy that reinterprets the vanilla RNN formulation: temporal integration is cast as gradient descent on features, which unifies the incorporation of diverse temporal information and embeds the proposed cues seamlessly into the network. Extensive experiments on nuScenes demonstrate that GDFusion significantly outperforms established baselines. Notably, on the Occ3D benchmark, it achieves 1.4%-4.8% mIoU improvements and reduces memory consumption by 27%-72%.
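The paper does not include code here, but the "gradient descent on features" reading of an RNN update can be made concrete with a minimal sketch. Below, one temporal fusion step is a single descent step on a quadratic alignment loss between the history feature and the current frame's observation; the names (`fuse_step`, `W`, `lr`) and the choice of loss are illustrative assumptions, not GDFusion's actual formulation.

```python
import numpy as np

def fuse_step(h_prev, x_t, W, lr=0.1):
    """One temporal fusion step viewed as gradient descent on features.

    The history feature h_prev is nudged toward consistency with the
    current observation x_t by descending a quadratic alignment loss
        L(h) = 0.5 * ||W @ h - x_t||^2,
    whose gradient at h_prev is W.T @ (W @ h_prev - x_t).
    (Hypothetical loss; chosen only to illustrate the idea.)
    """
    grad = W.T @ (W @ h_prev - x_t)  # dL/dh evaluated at h_prev
    return h_prev - lr * grad        # fused feature = one descent step

# Toy usage: stream a short sequence of per-frame observations.
rng = np.random.default_rng(0)
d_obs, d_feat = 8, 16
W = rng.normal(scale=0.3, size=(d_obs, d_feat))  # illustrative observation model
h = np.zeros(d_feat)                             # initial history feature
for _ in range(5):
    x_t = rng.normal(size=d_obs)                 # stand-in for a frame feature
    h = fuse_step(h, x_t, W)
```

Unrolling the update gives h_t = (I - lr·WᵀW) h_{t-1} + lr·Wᵀ x_t, which has the shape of a linear RNN cell; this is the direction of the RNN reinterpretation the abstract describes, though the paper's actual cues and losses differ.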
URL
https://arxiv.org/abs/2504.12959