Abstract
Physical reasoning is important for effective robot manipulation. Recent work has investigated both the vision and language modalities for physical reasoning: vision can reveal information about objects in the environment, while language serves as an abstraction and communication medium for additional context. Although these approaches have demonstrated success on a variety of physical reasoning tasks, they are limited to physical properties that can be inferred from visual or language inputs. In this work, we investigate combining tactile perception with language, which enables embodied systems to obtain physical properties through interaction and to apply common-sense reasoning. We contribute a new dataset, PhysiCleAR, which comprises both physical/property reasoning tasks and annotated tactile videos obtained using a GelSight tactile sensor. We then introduce Octopi, a system that leverages both tactile representation learning and large vision-language models to predict and reason about tactile inputs with minimal language fine-tuning. Our evaluations on PhysiCleAR show that Octopi effectively uses intermediate physical property predictions to improve physical reasoning, both on trained tasks and in zero-shot settings. PhysiCleAR and Octopi are available on this https URL.
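One plausible reading of the pipeline the abstract describes is two-stage: a learned tactile encoder maps GelSight video to intermediate physical-property predictions (e.g., hardness, roughness, bumpiness), which are then verbalized so a large language model can reason over them. The sketch below is a minimal, hypothetical Python illustration of that flow, not the authors' API: every name is invented, the encoder and classifier are stubbed, and the actual system (see the URL below) may couple the tactile embeddings and the language model more tightly than plain verbalization.

```python
# Hypothetical sketch of a tactile-to-language reasoning pipeline.
# None of these functions are from the Octopi codebase; the encoder
# and property classifier are placeholder stubs.

from dataclasses import dataclass
from typing import List

@dataclass
class PropertyEstimate:
    hardness: str   # e.g. "soft" / "hard"
    roughness: str  # e.g. "smooth" / "rough"
    bumpiness: str  # e.g. "low" / "high"

def encode_tactile_video(frames: List[bytes]) -> List[float]:
    """Stage 1a (stub): a trained tactile encoder maps GelSight
    frames to an embedding. Here we return a placeholder vector."""
    return [0.0] * 512

def predict_properties(embedding: List[float]) -> PropertyEstimate:
    """Stage 1b (stub): classifier heads over the embedding yield
    intermediate physical-property labels."""
    return PropertyEstimate(hardness="soft", roughness="smooth",
                            bumpiness="low")

def build_reasoning_prompt(props: PropertyEstimate, question: str) -> str:
    """Stage 2: the predicted properties are verbalized and handed
    to a (vision-)language model for common-sense reasoning."""
    return (f"The touched object feels {props.hardness} and "
            f"{props.roughness}, with {props.bumpiness} bumpiness. "
            f"{question}")

if __name__ == "__main__":
    emb = encode_tactile_video(frames=[])
    props = predict_properties(emb)
    print(build_reasoning_prompt(props, "Is this object likely ripe?"))
```

The point of the intermediate property stage, as the abstract states, is that explicit property predictions improve downstream physical reasoning, including in zero-shot settings where the reasoning task itself was never trained on.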
URL
https://arxiv.org/abs/2405.02794