Abstract
We present DroNeRF, a novel optimization algorithm for the autonomous positioning of monocular camera drones around an object for real-time 3D reconstruction from only a few images. Neural Radiance Fields (NeRF) is a novel view synthesis technique that generates new views of an object or scene from a set of input images. Using drones in conjunction with NeRF provides a unique and dynamic way to generate novel views of a scene, especially when capture is constrained by restricted drone movement. Our approach computes optimized poses for individual drones based solely on the object's geometry, without relying on any external localization system. Camera placement during the data-capturing phase significantly impacts the quality of the resulting 3D model. To evaluate the quality of the generated novel views, we compute perceptual metrics such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM). Our work demonstrates that optimally placing multiple drones with limited mobility yields perceptually better reconstructions.
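The two perceptual metrics named in the abstract can be sketched in a few lines of Python. This is an illustrative implementation, not the authors' evaluation code; in particular, the SSIM here is a simplified global variant (the standard metric averages SSIM over local Gaussian windows):

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak Signal-to-Noise Ratio between a reference and a test image."""
    mse = np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, img, max_val=255.0):
    """Simplified, non-windowed SSIM over the whole image.
    Treat this as an illustrative approximation of the standard metric."""
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    x, y = ref.astype(np.float64), img.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Toy example: a uniform offset of 10 gray levels gives MSE = 100.
ref = np.zeros((2, 2))
noisy = ref + 10.0
print(round(psnr(ref, noisy), 2))  # 28.13
```

Higher PSNR and an SSIM closer to 1 both indicate that a rendered novel view is perceptually closer to the ground-truth image.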
URL
https://arxiv.org/abs/2303.04322