Abstract
We introduce SAOR, a novel approach for estimating the 3D shape, texture, and viewpoint of an articulated object from a single image captured in the wild. Unlike prior approaches that rely on pre-defined category-specific 3D templates or tailored 3D skeletons, SAOR learns to articulate shapes from single-view image collections with a skeleton-free part-based model, without requiring any 3D object shape priors. To prevent ill-posed solutions, we propose a cross-instance consistency loss that exploits disentangled object shape deformation and articulation. This is supported by a new silhouette-based sampling mechanism that enhances viewpoint diversity during training. Our method requires only estimated object silhouettes and relative depth maps from off-the-shelf pre-trained networks during training. At inference time, given a single-view image, it efficiently outputs an explicit mesh representation. We obtain improved qualitative and quantitative results on challenging quadruped animals compared to relevant existing work.
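The abstract does not specify how the silhouette-based sampling mechanism works. As a rough illustration of the general idea only (not the authors' implementation), one could use each silhouette's bounding-box aspect ratio as a coarse viewpoint proxy and sample inversely to bin frequency, so under-represented viewpoints appear more often in a batch. All function names and the binning scheme below are assumptions for the sketch:

```python
import numpy as np

def silhouette_aspect_ratios(masks):
    """Width/height aspect ratio of each binary silhouette's bounding box
    (a crude viewpoint proxy: side views tend to be wide, frontal views narrow)."""
    ratios = []
    for m in masks:
        ys, xs = np.nonzero(m)
        h = ys.max() - ys.min() + 1
        w = xs.max() - xs.min() + 1
        ratios.append(w / h)
    return np.array(ratios)

def balanced_sample(masks, batch_size, n_bins=4, rng=None):
    """Sample image indices with inverse-frequency weights over aspect-ratio
    bins, so rare viewpoint proxies are sampled more often.  Hypothetical
    sketch, not the mechanism from the paper."""
    rng = rng if rng is not None else np.random.default_rng()
    ratios = silhouette_aspect_ratios(masks)
    # quantile-based bin edges; interior edges used for digitize
    edges = np.quantile(ratios, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.digitize(ratios, edges[1:-1]), 0, n_bins - 1)
    counts = np.bincount(bins, minlength=n_bins)
    weights = 1.0 / counts[bins]          # inverse bin frequency per image
    weights = weights / weights.sum()     # normalize to a distribution
    return rng.choice(len(masks), size=batch_size, replace=True, p=weights)
```

With such a scheme, a training batch drawn via `balanced_sample` mixes elongated and compact silhouettes instead of reflecting the raw (often side-view-heavy) distribution of web images.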
URL
https://arxiv.org/abs/2303.13514