Abstract
Toward infinite-scale 3D city synthesis, we propose a novel framework, InfiniCity, which constructs and renders an arbitrarily large, 3D-grounded environment from random noise. InfiniCity decomposes this seemingly impractical task into three feasible modules, taking advantage of both 2D and 3D data. First, an infinite-pixel image synthesis module generates arbitrary-scale 2D maps from the bird's-eye view. Next, an octree-based voxel completion module lifts the generated 2D map to 3D octrees. Finally, a voxel-based neural rendering module texturizes the voxels and renders 2D images. InfiniCity can thus synthesize arbitrary-scale, traversable 3D city environments and allows flexible, interactive editing by users. We quantitatively and qualitatively demonstrate the efficacy of the proposed framework. Project page: this https URL
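The three-stage decomposition described above (bird's-eye-view map synthesis, voxel lifting, rendering) can be sketched as a minimal pipeline. Everything below is illustrative: the function names, the category-to-height rule, and the top-down projection are placeholder stand-ins, not the paper's actual generative models or octree/neural-rendering implementation.

```python
import random

def synthesize_map(width, height, seed=0):
    """Stage 1 stand-in: the paper's infinite-pixel synthesis module;
    here just a random categorical BEV map (0=road, 1=building, 2=tree)."""
    rng = random.Random(seed)
    return [[rng.choice([0, 1, 2]) for _ in range(width)] for _ in range(height)]

def lift_to_voxels(bev_map, max_height=8):
    """Stage 2 stand-in: the octree-based voxel completion module;
    here a crude extrusion of each cell to a category-dependent height."""
    heights = {0: 1, 1: max_height, 2: 2}  # hypothetical height rule
    voxels = set()
    for y, row in enumerate(bev_map):
        for x, cat in enumerate(row):
            for z in range(heights[cat]):
                voxels.add((x, y, z))  # occupied voxel coordinates
    return voxels

def render_top_down(voxels, width, height):
    """Stage 3 stand-in: the voxel-based neural rendering module;
    here a simple height-map projection of the occupied voxels."""
    img = [[0] * width for _ in range(height)]
    for x, y, z in voxels:
        img[y][x] = max(img[y][x], z + 1)  # tallest voxel per column
    return img

# Compose the three stages, mirroring the framework's decomposition.
bev = synthesize_map(16, 16, seed=42)
vox = lift_to_voxels(bev)
img = render_top_down(vox, 16, 16)
```

The point of the sketch is only the structure: each stage consumes the previous stage's output, so the 2D map synthesis can run at arbitrary scale independently of the 3D stages.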
URL
https://arxiv.org/abs/2301.09637