Abstract
Learning-based isosurface extraction methods have recently emerged as a robust and efficient alternative to axiomatic techniques. However, the vast majority of such approaches rely on supervised training with axiomatically computed ground truths, thus potentially inheriting the biases and data artifacts of the corresponding axiomatic methods. Steering away from such dependencies, we propose a self-supervised training scheme for the Neural Dual Contouring meshing framework, resulting in our method: Self-Supervised Dual Contouring (SDC). Instead of optimizing predicted mesh vertices with supervised training, we use two novel self-supervised loss functions that encourage consistency, up to first order, between the input SDF and distances to the generated mesh. Meshes reconstructed by SDC surpass existing data-driven methods in capturing intricate details while being more robust to possible irregularities in the input. Furthermore, we use the same self-supervised training objective, linking the inferred mesh and the input SDF, to regularize the training process of Deep Implicit Networks (DINs). We demonstrate that the resulting DINs produce higher-quality implicit functions, ultimately leading to more accurate and detail-preserving surfaces compared to prior baselines across different input modalities. Finally, we demonstrate that our self-supervised losses improve meshing performance in the single-view reconstruction task by enabling joint training of the predicted SDF and the resulting output mesh. We open-source our code at this https URL
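The "consistency up to first order" idea can be sketched in miniature: at sampled points, the distance to the generated mesh should agree with the input SDF both in value (zeroth order) and in gradient (first order). The toy below is an illustrative assumption, not the paper's actual loss: it uses an analytic circle SDF as the input, treats a second analytic SDF as a stand-in for the distance to a predicted mesh, and estimates gradients by finite differences.

```python
import numpy as np

def sdf_circle(p, r=1.0):
    # analytic signed distance to a circle of radius r centered at the origin
    return np.linalg.norm(p, axis=-1) - r

def numerical_grad(f, p, eps=1e-4):
    # central finite differences, one coordinate axis at a time
    g = np.zeros_like(p)
    for i in range(p.shape[-1]):
        dp = np.zeros_like(p)
        dp[..., i] = eps
        g[..., i] = (f(p + dp) - f(p - dp)) / (2 * eps)
    return g

def consistency_losses(d_mesh, d_input, pts):
    # zeroth order: distance to the mesh should match the input SDF value
    l0 = np.mean(np.abs(d_mesh(pts) - d_input(pts)))
    # first order: the two distance fields' gradients should also agree
    l1 = np.mean(np.linalg.norm(
        numerical_grad(d_mesh, pts) - numerical_grad(d_input, pts), axis=-1))
    return l0, l1

rng = np.random.default_rng(0)
pts = rng.uniform(-2.0, 2.0, size=(256, 2))

# perfect agreement: both loss terms vanish
l0_same, l1_same = consistency_losses(sdf_circle, sdf_circle, pts)

# a "mesh" offset by 0.1 in radius: value loss is exactly 0.1,
# but the gradient (normal direction) still agrees, so l1 stays ~0
l0_off, l1_off = consistency_losses(lambda p: sdf_circle(p, r=1.1),
                                    sdf_circle, pts)
```

The offset-radius case shows why both terms matter: a value-only loss penalizes the shifted surface, while the first-order term separately checks that surface orientation (the gradient field) is preserved.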
URL
https://arxiv.org/abs/2405.18131