Abstract
Diffusion models that generate images conditioned on text, such as Dall-E 2 and Stable Diffusion, have recently made a splash far beyond the computer vision community. Here, we tackle the related problem of generating point clouds, both unconditionally and conditioned on images. For the latter, we introduce a novel geometrically motivated conditioning scheme based on projecting sparse image features into the point cloud and attaching them to each individual point at every step of the denoising process. This approach improves geometric consistency and yields greater fidelity than current methods, which rely on unstructured, global latent codes. Additionally, we show how to apply recent continuous-time diffusion schemes. Our method performs on par with or above the state of the art in conditional and unconditional experiments on synthetic data, while being faster, lighter, and delivering tractable likelihoods. We show that it also scales to diverse indoor scenes.
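The per-point conditioning described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the pinhole camera model, and the nearest-neighbour feature sampling are assumptions made for the sketch; it shows the core idea of projecting 3D points into an image feature map and attaching the sampled features to each point.

```python
import numpy as np

def project_and_attach(points, feat_map, K):
    """Attach image features to 3D points by pinhole projection (illustrative sketch).

    points:   (N, 3) array in camera coordinates (z > 0).
    feat_map: (C, H, W) image feature map.
    K:        (3, 3) camera intrinsics matrix.
    Returns an (N, 3 + C) array: xyz concatenated with the sampled features.
    """
    C, H, W = feat_map.shape
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    u = K[0, 0] * x / z + K[0, 2]          # pixel column of each projected point
    v = K[1, 1] * y / z + K[1, 2]          # pixel row of each projected point
    # Nearest-neighbour sampling, clamped to the image bounds.
    ui = np.clip(np.round(u).astype(int), 0, W - 1)
    vi = np.clip(np.round(v).astype(int), 0, H - 1)
    sampled = feat_map[:, vi, ui].T        # (N, C) per-point image features
    return np.concatenate([points, sampled], axis=1)

# Toy example: 4 points in front of the camera, an 8-channel 16x16 feature map.
rng = np.random.default_rng(0)
pts = np.stack([rng.uniform(-1, 1, 4),
                rng.uniform(-1, 1, 4),
                rng.uniform(2, 4, 4)], axis=1)
feats = rng.normal(size=(8, 16, 16)).astype(np.float32)
K = np.array([[8.0, 0.0, 8.0],
              [0.0, 8.0, 8.0],
              [0.0, 0.0, 1.0]])
out = project_and_attach(pts, feats, K)
print(out.shape)  # (4, 11)
```

In the paper's scheme this attachment is repeated at every denoising step, so the conditioning stays geometrically aligned with the evolving point cloud rather than being injected once as a global latent code.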
URL
https://arxiv.org/abs/2303.05916