Paper Reading AI Learner

DPPE: Dense Pose Estimation in a Plenoxels Environment using Gradient Approximation

2024-03-16 02:22:10
Christopher Kolios, Yeganeh Bahoo, Sajad Saeedi

Abstract

We present DPPE, a dense pose estimation algorithm that operates over a Plenoxels environment. Recent advances in neural radiance field techniques have shown them to be a powerful tool for environment representation, and more recent neural rendering algorithms have significantly reduced training times and improved rendering speed. Plenoxels introduced a fully differentiable radiance field technique that uses plenoptic volume elements (voxels) for rendering, offering reduced training times and better rendering accuracy while also eliminating the neural network component. In this work, we introduce a 6-DoF, monocular, RGB-only pose estimation procedure for Plenoxels that seeks to recover the ground-truth camera pose after a perturbation. We employ a variation on classical template matching, using stochastic gradient descent to optimize the pose by minimizing the re-rendering error. In particular, we examine an approach that takes advantage of the rapid rendering speed of Plenoxels to numerically approximate part of the pose gradient using a central-differencing technique. We show that such methods are effective for pose estimation. Finally, we perform ablations over key components of the problem space, with a particular focus on image subsampling and Plenoxel grid resolution. Project website: this https URL
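
As a rough illustration of the procedure described above, the Python sketch below estimates the pose gradient by central differencing over re-renders and applies stochastic gradient descent on a photometric loss. It is not the authors' implementation: render() is only a stand-in for a Plenoxels render call, and the 6-vector pose parameterization, step size eps, learning rate, iteration count, and 512-pixel subsample are all illustrative assumptions.

import numpy as np

H, W = 64, 64

def render(pose):
    # Stand-in for a Plenoxels render call: any smooth, image-valued
    # function of the 6-vector pose [tx, ty, tz, rx, ry, rz] suffices here.
    yy, xx = np.mgrid[0:H, 0:W] / float(max(H, W))
    phase = pose[:3].sum() + 0.1 * pose[3:].sum()
    img = 0.5 + 0.5 * np.sin(4.0 * (xx + yy) + phase)
    return np.stack([img, 0.8 * img, 0.6 * img], axis=-1)

def photometric_loss(pose, target, pixel_idx=None):
    # Mean squared re-rendering error, optionally over a pixel subsample.
    diff = (render(pose) - target).reshape(-1, 3)
    if pixel_idx is not None:
        diff = diff[pixel_idx]
    return float(np.mean(diff ** 2))

def central_difference_grad(pose, target, eps=1e-3, pixel_idx=None):
    # Approximate dL/dp_i by re-rendering at p +/- eps along each axis:
    # g_i ~= (L(p + eps*e_i) - L(p - eps*e_i)) / (2 * eps).
    grad = np.zeros_like(pose)
    for i in range(pose.size):
        step = np.zeros_like(pose)
        step[i] = eps
        grad[i] = (photometric_loss(pose + step, target, pixel_idx)
                   - photometric_loss(pose - step, target, pixel_idx)) / (2.0 * eps)
    return grad

rng = np.random.default_rng(0)
gt_pose = np.zeros(6)                              # ground-truth camera pose
pose = gt_pose + rng.normal(scale=0.05, size=6)    # perturbed initial pose
target = render(gt_pose)                           # observed image

print("initial loss:", photometric_loss(pose, target))
lr = 0.5
for it in range(200):
    # Subsample pixels each iteration so the loss (and hence the gradient
    # estimate) is stochastic, mirroring image subsampling.
    pixel_idx = rng.choice(H * W, size=512, replace=False)
    pose -= lr * central_difference_grad(pose, target, pixel_idx=pixel_idx)
print("final loss:", photometric_loss(pose, target))

Each gradient estimate here costs two renders per pose parameter (twelve per step), which is workable only because Plenoxels renders quickly; note also that the abstract approximates only part of the pose gradient numerically, whereas this sketch approximates all of it for simplicity.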

URL

https://arxiv.org/abs/2403.10773

PDF

https://arxiv.org/pdf/2403.10773.pdf

