Paper Reading AI Learner

ShapeGlot: Learning Language for Shape Differentiation

2019-05-08 06:01:33
Panos Achlioptas, Judy Fan, Robert X.D. Hawkins, Noah D. Goodman, Leonidas J. Guibas

Abstract

In this work we explore how fine-grained differences between the shapes of common objects are expressed in language, grounded on images and 3D models of the objects. We first build a large scale, carefully controlled dataset of human utterances that each refers to a 2D rendering of a 3D CAD model so as to distinguish it from a set of shape-wise similar alternatives. Using this dataset, we develop neural language understanding (listening) and production (speaking) models that vary in their grounding (pure 3D forms via point-clouds vs. rendered 2D images), the degree of pragmatic reasoning captured (e.g. speakers that reason about a listener or not), and the neural architecture (e.g. with or without attention). We find models that perform well with both synthetic and human partners, and with held out utterances and objects. We also find that these models are amenable to zero-shot transfer learning to novel object classes (e.g. transfer from training on chairs to testing on lamps), as well as to real-world images drawn from furniture catalogs. Lesion studies indicate that the neural listeners depend heavily on part-related words and associate these words correctly with visual parts of objects (without any explicit network training on object parts), and that transfer to novel classes is most successful when known part-words are available. This work illustrates a practical approach to language grounding, and provides a case study in the relationship between object shape and linguistic structure when it comes to object differentiation.
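The speaker–listener setup described above can be illustrated with a minimal, self-contained sketch. This is not the paper's neural model: the compatibility scores below are hypothetical hand-picked numbers standing in for a trained listener's logits over a target object and two shape-wise similar distractors, and the "pragmatic speaker" simply picks the utterance that maximizes the listener's probability of choosing the target.

```python
import math

# Hypothetical listener logits: each candidate utterance gets a
# compatibility score with three shape-wise similar chairs
# (object 0 is the intended target). Real scores would come from a
# trained neural listener grounded in point clouds or rendered images.
SCORES = {
    "the one with thin legs": [2.0, 0.5, 0.4],
    "the wooden chair":       [1.2, 1.1, 1.0],
    "the one with armrests":  [0.3, 2.5, 0.2],
}

def listener_probs(utterance):
    """Listener: softmax over the candidate objects given an utterance."""
    logits = SCORES[utterance]
    exps = [math.exp(v) for v in logits]
    z = sum(exps)
    return [e / z for e in exps]

def pragmatic_speaker(target_idx):
    """Speaker that reasons about the listener: choose the utterance
    under which the listener is most likely to pick the target."""
    return max(SCORES, key=lambda u: listener_probs(u)[target_idx])

if __name__ == "__main__":
    print(pragmatic_speaker(0))  # -> "the one with thin legs"
```

A literal (non-pragmatic) speaker would score utterances against the target alone; the pragmatic version instead normalizes over the distractors, which is what makes it prefer discriminative part-words ("thin legs") over true-but-ambiguous descriptions ("the wooden chair").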


URL

https://arxiv.org/abs/1905.02925

PDF

https://arxiv.org/pdf/1905.02925.pdf

