Abstract
With recent developments in Embodied Artificial Intelligence (EAI) research, there has been a growing demand for high-quality, large-scale interactive scene generation. While prior methods in scene synthesis have prioritized the naturalness and realism of the generated scenes, the physical plausibility and interactivity of scenes have been largely left unexplored. To address this disparity, we introduce PhyScene, a novel method dedicated to generating interactive 3D scenes characterized by realistic layouts, articulated objects, and rich physical interactivity tailored for embodied agents. Based on a conditional diffusion model for capturing scene layouts, we devise novel physics- and interactivity-based guidance mechanisms that integrate constraints from object collision, room layout, and object reachability. Through extensive experiments, we demonstrate that PhyScene effectively leverages these guidance functions for physically interactable scene synthesis, outperforming existing state-of-the-art scene synthesis methods by a large margin. Our findings suggest that the scenes generated by PhyScene hold considerable potential for facilitating diverse skill acquisition among agents within interactive environments, thereby catalyzing further advancements in embodied AI research. Project website: this http URL.
URL
https://arxiv.org/abs/2404.09465