Abstract
We present a novel problem setting in zero-shot learning: zero-shot object recognition and detection in context. In contrast to traditional zero-shot learning methods, which infer unseen categories simply by transferring knowledge from objects of semantically similar seen categories, we aim to identify novel objects in an image surrounded by known objects using an inter-object relation prior. Specifically, we leverage the visual context and the geometric relationships among all pairs of objects in a single image to capture information useful for inferring unseen categories. Our context-aware zero-shot learning framework integrates seamlessly with traditional zero-shot learning techniques through a Conditional Random Field (CRF). The proposed algorithm is evaluated on both zero-shot region classification and zero-shot detection tasks. Results on the Visual Genome (VG) dataset show that the additional visual context significantly boosts performance compared to traditional methods.
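The CRF described above combines per-region zero-shot scores (unary potentials) with inter-object context (pairwise potentials). The following is a minimal, hypothetical sketch of that idea — not the authors' implementation — using brute-force MAP inference over a small label set; the score matrices and their construction are assumptions for illustration.

```python
import itertools


def map_labels(unary, pairwise):
    """Brute-force MAP inference over a fully connected CRF.

    unary[i][c]    : zero-shot compatibility score of label c for region i
                     (e.g., from a semantic-embedding classifier).
    pairwise[c][d] : context score for labels c and d co-occurring in
                     one image (visual/geometric relation prior).
    Returns the label assignment maximizing the total score. Exhaustive
    search is exponential in the number of regions, so this is only
    illustrative for a handful of regions and labels.
    """
    num_regions = len(unary)
    num_labels = len(unary[0])
    best, best_score = None, float("-inf")
    for labels in itertools.product(range(num_labels), repeat=num_regions):
        # Sum of unary (per-region) scores for this assignment.
        score = sum(unary[i][l] for i, l in enumerate(labels))
        # Sum of pairwise (context) scores over all region pairs.
        score += sum(pairwise[labels[i]][labels[j]]
                     for i in range(num_regions)
                     for j in range(i + 1, num_regions))
        if score > best_score:
            best, best_score = list(labels), score
    return best
```

For example, if a region's unary scores are ambiguous between two labels, a pairwise term rewarding labels that co-occur with a confidently recognized neighbor can tip the assignment, which is the effect the abstract attributes to the visual context.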
URL
https://arxiv.org/abs/1904.09320