Paper Reading AI Learner

Self-Supervised Dual Contouring

2024-05-28 12:44:28
Ramana Sundararaman, Roman Klokov, Maks Ovsjanikov

Abstract

Learning-based isosurface extraction methods have recently emerged as a robust and efficient alternative to axiomatic techniques. However, the vast majority of such approaches rely on supervised training with axiomatically computed ground truths, thus potentially inheriting biases and data artifacts of the corresponding axiomatic methods. Steering away from such dependencies, we propose a self-supervised training scheme for the Neural Dual Contouring meshing framework, resulting in our method: Self-Supervised Dual Contouring (SDC). Instead of optimizing predicted mesh vertices with supervised training, we use two novel self-supervised loss functions that encourage consistency, up to first order, between the input SDF and distances to the generated mesh. Meshes reconstructed by SDC surpass existing data-driven methods in capturing intricate details while being more robust to possible irregularities in the input. Furthermore, we use the same self-supervised training objective, linking the inferred mesh and the input SDF, to regularize the training process of Deep Implicit Networks (DINs). We demonstrate that the resulting DINs produce higher-quality implicit functions, ultimately leading to more accurate and detail-preserving surfaces compared to prior baselines across different input modalities. Finally, we demonstrate that our self-supervised losses improve meshing performance in the single-view reconstruction task by enabling joint training of the predicted SDF and the resulting output mesh. We open-source our code at this https URL
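The abstract describes two self-supervised losses that tie the generated mesh back to the input SDF: a zeroth-order term (the unsigned distance from a sample point to the mesh should match the magnitude of the input SDF at that point) and a first-order term (the SDF gradient should align with the mesh surface normal). The sketch below is a minimal conceptual illustration of that idea, not the paper's implementation; the function names, the plane-based toy mesh, and the exact loss forms are assumptions for demonstration only.

```python
import numpy as np

def zeroth_order_loss(sdf_vals, dist_to_mesh):
    # Zeroth-order consistency (illustrative form): the unsigned distance
    # from each sample point to the generated mesh should equal |SDF|.
    return np.mean(np.abs(np.abs(sdf_vals) - dist_to_mesh))

def first_order_loss(sdf_grads, mesh_normals):
    # First-order consistency (illustrative form): the input SDF gradient
    # at each sample point should align, up to sign, with the normal of
    # the closest mesh element.
    cos = np.abs(np.sum(sdf_grads * mesh_normals, axis=1))
    return np.mean(1.0 - cos)

# Toy setup: the "generated mesh" is the plane z = 0, and the input SDF is
# f(p) = p_z, whose zero level set is exactly that plane. In that case both
# losses vanish; a mismatched mesh would make them positive.
rng = np.random.default_rng(0)
pts = rng.normal(size=(64, 3))                   # sample points in space
sdf = pts[:, 2]                                  # f(p) = p_z
grads = np.tile([0.0, 0.0, 1.0], (64, 1))        # ∇f, constant for a plane
d_mesh = np.abs(pts[:, 2])                       # unsigned distance to z = 0
normals = np.tile([0.0, 0.0, 1.0], (64, 1))      # plane normal at closest point

print(zeroth_order_loss(sdf, d_mesh))    # 0.0: mesh matches the SDF zero set
print(first_order_loss(grads, normals))  # 0.0: gradients align with normals
```

In the actual method these terms would be evaluated against the differentiably extracted Dual Contouring mesh and backpropagated to the vertex predictions; here the mesh is fixed purely to make the consistency conditions concrete.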


URL

https://arxiv.org/abs/2405.18131

PDF

https://arxiv.org/pdf/2405.18131.pdf

