Octopi: Object Property Reasoning with Large Tactile-Language Models

2024-05-05 02:22:11
Samson Yu, Kelvin Lin, Anxing Xiao, Jiafei Duan, Harold Soh

Abstract

Physical reasoning is important for effective robot manipulation. Recent work has investigated both vision and language modalities for physical reasoning; vision can reveal information about objects in the environment, and language serves as an abstraction and communication medium for additional context. Although these works have demonstrated success on a variety of physical reasoning tasks, they are limited to physical properties that can be inferred from visual or language inputs. In this work, we investigate combining tactile perception with language, which enables embodied systems to obtain physical properties through interaction and apply common-sense reasoning. We contribute a new dataset, PhysiCleAR, which comprises both physical/property reasoning tasks and annotated tactile videos obtained using a GelSight tactile sensor. We then introduce Octopi, a system that leverages both tactile representation learning and large vision-language models to predict and reason about tactile inputs with minimal language fine-tuning. Our evaluations on PhysiCleAR show that Octopi is able to effectively use intermediate physical property predictions to improve physical reasoning on both trained tasks and zero-shot reasoning. PhysiCleAR and Octopi are available at this https URL.
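The abstract describes a concrete pipeline: encode GelSight tactile video, predict intermediate physical properties (e.g., hardness, roughness, bumpiness), and feed those predictions to a language model for reasoning. Below is a minimal, hypothetical PyTorch sketch of that flow; the module names (TactileEncoder, PropertyHead), the stand-in CNN, and the prompt format are illustrative assumptions, not Octopi's actual implementation, which pairs tactile representation learning with a large vision-language model.

```python
# Hypothetical sketch of the tactile -> property -> language pipeline from the
# abstract. Names and architecture are illustrative, not Octopi's API.
import torch
import torch.nn as nn

class TactileEncoder(nn.Module):
    """Encodes a GelSight tactile video clip into a single embedding."""
    def __init__(self, embed_dim: int = 512):
        super().__init__()
        # Stand-in per-frame CNN; the paper uses learned tactile representations.
        self.frame_net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        # video: (T, 3, H, W) -> per-frame embeddings, mean-pooled over time.
        return self.frame_net(video).mean(dim=0, keepdim=True)

class PropertyHead(nn.Module):
    """Predicts intermediate physical property levels from the embedding."""
    def __init__(self, embed_dim: int = 512, num_levels: int = 3):
        super().__init__()
        self.heads = nn.ModuleDict({
            p: nn.Linear(embed_dim, num_levels)
            for p in ("hardness", "roughness", "bumpiness")
        })

    def forward(self, z: torch.Tensor) -> dict:
        return {p: head(z).argmax(dim=-1).item() for p, head in self.heads.items()}

# Usage: encode a dummy 10-frame tactile clip, predict property levels, then
# serialize them into a prompt an LLM would use for downstream reasoning.
encoder, props = TactileEncoder(), PropertyHead()
clip = torch.randn(10, 3, 224, 224)   # stand-in GelSight frames
levels = props(encoder(clip))          # e.g. {'hardness': 2, 'roughness': 0, ...}
prompt = f"The object feels: {levels}. What could this object be?"
print(prompt)
```

The key design point the abstract highlights is the intermediate step: rather than mapping tactile input directly to answers, property predictions act as a grounded, interpretable interface between the tactile encoder and the language model's common-sense reasoning.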

URL

https://arxiv.org/abs/2405.02794

PDF

https://arxiv.org/pdf/2405.02794.pdf

