Abstract
Synthesizing realistic and diverse indoor 3D scene layouts in a controllable fashion opens up applications in simulated navigation and virtual reality. As concise and robust representations of a scene, scene graphs have proven well-suited for semantic control over the generated layout. We present a variant of the conditional variational autoencoder (cVAE) that synthesizes 3D scenes from scene graphs and floor plans. We exploit the properties of self-attention layers to capture high-level relationships between objects in a scene, and use them as the building blocks of our model. Our model leverages graph transformers to estimate the size, dimensions, and orientation of the objects in a room while satisfying the relationships in the given scene graph. Our experiments show that self-attention layers lead to sparser (HOW MUCH) and more diverse (HOW MUCH) scenes. As part of this work, we publish the first large-scale dataset for conditioned scene generation from scene graphs, containing over XXX rooms (with floor plans and scene graphs).
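As a rough illustration of the building block the abstract refers to, here is a minimal NumPy sketch of a single scaled dot-product self-attention layer applied to per-object feature vectors. This is not the paper's actual architecture (the graph transformer additionally conditions on scene-graph relationships and the floor plan); all names and dimensions below are illustrative assumptions.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """One scaled dot-product self-attention layer.

    X: (n_objects, d) matrix of per-object features.
    Wq, Wk, Wv: (d, d) projection matrices (illustrative; the paper's
    graph transformer also incorporates scene-graph edge information).
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)          # pairwise object-object affinities
    # numerically stable row-wise softmax
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                      # each object attends to all others

rng = np.random.default_rng(0)
n, d = 5, 8  # e.g. 5 objects in a room, 8-dim features
X = rng.normal(size=(n, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8): one updated feature vector per object
```

Each output row is a relationship-aware mixture of all object features, which is why such layers are a natural fit for capturing high-level object-object relations in a scene.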
URL
https://arxiv.org/abs/2404.01887