Abstract
Image-based multi-person reconstruction in wide-field, large-scale scenes is critical for crowd analysis and security alerting. However, existing methods cannot handle large scenes containing hundreds of people, which pose the challenges of a large number of people, large variations in human scale, and complex spatial distributions. In this paper, we propose Crowd3D, the first framework to reconstruct the 3D poses, shapes, and locations of hundreds of people with global consistency from a single large-scene image. The core of our approach is to convert the problem of complex crowd localization into pixel localization with the help of our newly defined concept, the Human-scene Virtual Interaction Point (HVIP). To reconstruct the crowd with global consistency, we propose a progressive reconstruction network based on HVIP that pre-estimates a scene-level camera and a ground plane. To handle the large number of people and the wide range of human sizes, we also design an adaptive human-centric cropping scheme. In addition, we contribute a benchmark dataset, LargeCrowd, for crowd reconstruction in large scenes. Experimental results demonstrate the effectiveness of the proposed method. The code and datasets will be made public.
URL
https://arxiv.org/abs/2301.09376