Paper Reading AI Learner

Multistable Shape from Shading Emerges from Patch Diffusion

2024-05-23 13:15:24
Xinran Nicole Han, Todd Zickler, Ko Nishino

Abstract

Models for monocular shape reconstruction of surfaces with diffuse reflection -- shape from shading -- ought to produce distributions of outputs, because there are fundamental mathematical ambiguities of both continuous (e.g., bas-relief) and discrete (e.g., convex/concave) varieties which are also experienced by humans. Yet, the outputs of current models are limited to point estimates or tight distributions around single modes, which prevent them from capturing these effects. We introduce a model that reconstructs a multimodal distribution of shapes from a single shading image, which aligns with the human experience of multistable perception. We train a small denoising diffusion process to generate surface normal fields from $16\times 16$ patches of synthetic images of everyday 3D objects. We deploy this model patch-wise at multiple scales, with guidance from inter-patch shape consistency constraints. Despite its relatively small parameter count and predominantly bottom-up structure, we show that multistable shape explanations emerge from this model for "ambiguous" test images that humans experience as being multistable. At the same time, the model produces veridical shape estimates for object-like images that include distinctive occluding contours and appear less ambiguous. This may inspire new architectures for stochastic 3D shape perception that are more efficient and better aligned with human experience.
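The core mechanism described above -- sampling surface-normal patches with a small diffusion model while guiding overlapping patches toward mutual agreement -- can be illustrated with a minimal sketch. This is not the authors' code: the denoiser below is a toy stand-in (the real model is a small trained denoising network conditioned on the shading image), and the consistency term only matches adjacent patch boundaries, a crude proxy for the paper's inter-patch shape consistency constraints.

```python
# Hedged, illustrative sketch of patch-wise guided diffusion sampling.
# All function names and constants here are hypothetical, not from the paper.
import numpy as np

P = 16          # patch size, as used in the paper
T = 50          # number of reverse diffusion steps (toy value)
rng = np.random.default_rng(0)

def toy_denoiser(x, t):
    """Stand-in for the trained patch denoiser: crudely estimates the
    noise remaining in x, assuming more noise at earlier (larger) t."""
    return x * (t / T)

def consistency_grad(patches):
    """Guidance term: pull each patch's right edge toward its neighbor's
    left edge, a simple proxy for inter-patch shape consistency."""
    grads = np.zeros_like(patches)
    for i in range(len(patches) - 1):
        diff = patches[i][:, -1, :] - patches[i + 1][:, 0, :]
        grads[i][:, -1, :] += diff
        grads[i + 1][:, 0, :] -= diff
    return grads

def sample(n_patches=3, guidance=0.1):
    """Reverse diffusion from Gaussian noise to a row of normal-field
    patches; 3 channels hold the (nx, ny, nz) normal components."""
    x = rng.standard_normal((n_patches, P, P, 3))
    for t in range(T, 0, -1):
        eps = toy_denoiser(x, t)
        x = x - eps / T                         # toy denoising step
        x = x - guidance * consistency_grad(x)  # steer toward consistency
    # project onto unit vectors, as a surface-normal field requires
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

normals = sample()
print(normals.shape)  # (3, 16, 16, 3)
```

Because sampling starts from independent noise draws, repeated calls to `sample()` can land in different modes -- the mechanism by which a diffusion model, unlike a point-estimate regressor, can express multistable interpretations such as convex versus concave.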


URL

https://arxiv.org/abs/2405.14530

PDF

https://arxiv.org/pdf/2405.14530.pdf

