Paper Reading AI Learner

Sparse multi-view hand-object reconstruction for unseen environments

2024-05-02 15:01:25
Yik Lung Pang, Changjae Oh, Andrea Cavallaro

Abstract

Recent works in hand-object reconstruction mainly focus on the single-view and dense multi-view settings. On the one hand, single-view methods can leverage learned shape priors to generalise to unseen objects but are prone to inaccuracies due to occlusions. On the other hand, dense multi-view methods are very accurate but cannot easily adapt to unseen objects without further data collection. In contrast, sparse multi-view methods can take advantage of the additional views to tackle occlusion, while keeping the computational cost low compared to dense multi-view methods. In this paper, we consider the problem of hand-object reconstruction with unseen objects in the sparse multi-view setting. Given multiple RGB images of the hand and object captured at the same time, our model SVHO combines the predictions from each view into a unified reconstruction without optimisation across views. We train our model on a synthetic hand-object dataset and evaluate directly on a real-world recorded hand-object dataset with unseen objects. We show that while reconstruction of unseen hands and objects from RGB is challenging, additional views can help improve the reconstruction quality.
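As a rough illustration of the view-combination idea described in the abstract (per-view predictions mapped into a shared reference frame using known camera poses and then pooled, without optimisation across views), here is a minimal NumPy sketch. The point-cloud representation, the function names, and the concatenation-based fusion are assumptions made for illustration only; they are not the authors' actual SVHO architecture.

```python
import numpy as np

def to_reference_frame(points_cam, cam_to_ref):
    """Map an (N, 3) point set from a camera frame into the shared reference frame."""
    R, t = cam_to_ref[:3, :3], cam_to_ref[:3, 3]
    return points_cam @ R.T + t

def fuse_views(per_view_points, cam_to_ref_transforms):
    """Combine per-view predictions into a single reconstruction.

    per_view_points: list of (N_i, 3) arrays, one predicted hand/object point set per RGB view.
    cam_to_ref_transforms: list of (4, 4) camera-to-reference poses, assumed known from calibration.
    """
    aligned = [to_reference_frame(p, T) for p, T in zip(per_view_points, cam_to_ref_transforms)]
    # Naive fusion: pool the aligned predictions directly, with no optimisation across views.
    return np.concatenate(aligned, axis=0)

# Example: three views, each predicting 1000 points in its own camera frame.
views = [np.random.rand(1000, 3) for _ in range(3)]
poses = [np.eye(4) for _ in range(3)]  # identity poses as placeholders
merged = fuse_views(views, poses)      # (3000, 3) fused point set
print(merged.shape)
```

In practice the per-view outputs could be meshes or signed-distance grids rather than raw point sets, and the pooling step could average overlapping predictions rather than simply concatenate them.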

Abstract (translated)

In recent years, hand-object reconstruction has mainly focused on the single-view and dense multi-view settings. On the one hand, single-view methods can use learned shape priors to generalise to unseen objects, but they suffer from inaccuracies due to occlusion. On the other hand, dense multi-view methods are very accurate, but require further data collection to adapt to unseen objects. In contrast, sparse multi-view methods can use the additional views to address occlusion while keeping the computational cost low compared to dense multi-view methods. In this paper, we consider the problem of hand-object reconstruction with unseen objects in the sparse multi-view setting. Given multiple RGB images of the same hand and object captured at the same time, our model SVHO merges the predictions from each view into a unified reconstruction without optimisation across views. We train the model on a synthetic hand-object dataset and evaluate it directly on a real-world recorded hand-object dataset with unseen objects. We show that, although reconstructing unseen hands and objects from RGB is challenging, additional views can improve the reconstruction quality.

URL

https://arxiv.org/abs/2405.01353

PDF

https://arxiv.org/pdf/2405.01353.pdf

