Abstract
The scale diversity of point cloud data presents significant challenges in developing unified representation learning techniques for 3D vision. Currently, there are few unified 3D models, and no existing pre-training method is equally effective for both object- and scene-level point clouds. In this paper, we introduce UniPre3D, the first unified pre-training method that can be seamlessly applied to point clouds of any scale and 3D models of any architecture. Our approach predicts Gaussian primitives as the pre-training task and employs differentiable Gaussian splatting to render images, enabling precise pixel-level supervision and end-to-end optimization. To further regulate the complexity of the pre-training task and direct the model's focus toward geometric structures, we integrate 2D features from pre-trained image models to incorporate well-established texture knowledge. We validate the universal effectiveness of our proposed method through extensive experiments across a variety of object- and scene-level tasks, using diverse point cloud models as backbones. Code is available at this https URL.
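To make the described pipeline concrete, below is a minimal, self-contained sketch of the core idea: a head that maps per-point backbone features to Gaussian primitive parameters, followed by a differentiable rendering step that enables pixel-level supervision and end-to-end optimization. This is not the authors' implementation; all names (GaussianHead, splat_isotropic, the parameter layout) are hypothetical, the toy renderer uses an orthographic, isotropic approximation instead of a full 3D Gaussian splatting rasterizer, and the paper's fusion of 2D features from a pre-trained image model is omitted.

```python
# Hypothetical sketch of Gaussian-prediction pre-training with pixel-level
# supervision. NOT the UniPre3D code; names and layouts are assumptions.
import torch
import torch.nn as nn

class GaussianHead(nn.Module):
    """Maps per-point backbone features to Gaussian primitive parameters."""
    def __init__(self, feat_dim: int):
        super().__init__()
        # offset(3) + log-scale(1, isotropic for simplicity) + opacity(1) + rgb(3)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.GELU(),
            nn.Linear(feat_dim, 8),
        )

    def forward(self, xyz: torch.Tensor, feats: torch.Tensor):
        out = self.mlp(feats)                           # (N, 8)
        centers = xyz + 0.05 * torch.tanh(out[:, :3])   # small offsets from input points
        scales  = torch.exp(out[:, 3]).clamp(1e-3, 0.5)
        opacity = torch.sigmoid(out[:, 4])
        colors  = torch.sigmoid(out[:, 5:8])
        return centers, scales, opacity, colors

def splat_isotropic(centers, scales, opacity, colors, H=32, W=32):
    """Toy differentiable splatting: orthographic projection onto the xy-plane
    with normalized, Gaussian-weighted color accumulation. Real Gaussian
    splatting additionally depth-sorts and alpha-composites the primitives."""
    ys = torch.linspace(-1, 1, H, device=centers.device)
    xs = torch.linspace(-1, 1, W, device=centers.device)
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")      # (H, W) pixel grid
    px = torch.stack([gx, gy], dim=-1).view(-1, 2)      # (H*W, 2)
    d2 = ((px[:, None, :] - centers[None, :, :2]) ** 2).sum(-1)     # (H*W, N)
    w = opacity[None, :] * torch.exp(-d2 / (2 * scales[None, :] ** 2))
    img = (w @ colors) / (w.sum(-1, keepdim=True) + 1e-8)           # (H*W, 3)
    return img.view(H, W, 3)

# Pixel-level supervision: render from the predicted Gaussians, compare to a
# ground-truth view, and backpropagate through the renderer end to end.
N, C = 1024, 64
xyz, feats = torch.rand(N, 3) * 2 - 1, torch.randn(N, C)  # stand-ins for backbone outputs
head = GaussianHead(C)
target = torch.rand(32, 32, 3)                             # stand-in for a real target view
rendered = splat_isotropic(*head(xyz, feats))
loss = nn.functional.l1_loss(rendered, target)
loss.backward()
```

Because the loss is computed in image space, gradients flow from every pixel back through the renderer into the point cloud backbone, which is what makes this rendering-based objective usable as a pre-training signal regardless of whether the input is an object- or scene-level point cloud.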
URL
https://arxiv.org/abs/2506.09952