Abstract
We propose Unified Visual-Semantic Embeddings (Unified VSE) for learning a joint space of visual representations and textual semantics. The model unifies the embeddings of concepts at different levels: objects, attributes, relations, and full scenes. We view sentential semantics as a combination of semantic components such as objects and relations; their embeddings are aligned with different image regions. We propose a contrastive learning approach for effectively learning this fine-grained alignment from only image-caption pairs. We also present a simple yet effective approach that enforces the coverage of caption embeddings over the semantic components appearing in the sentence. We demonstrate that Unified VSE outperforms baselines on cross-modal retrieval tasks, and that enforcing semantic coverage improves the model's robustness against text-domain adversarial attacks. Moreover, our model enables the use of visual cues to accurately resolve word dependencies in novel sentences.
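As a rough illustration of the contrastive learning step, the PyTorch sketch below implements a standard margin-based bidirectional ranking loss over a batch of matched image/caption embeddings, as is typical in VSE-style models. All function and variable names here are hypothetical; the paper's actual objective additionally aligns component-level embeddings (objects, attributes, relations) with image regions.

    import torch
    import torch.nn.functional as F

    def contrastive_alignment_loss(img_emb, cap_emb, margin=0.2):
        """Margin-based bidirectional ranking loss (illustrative sketch).

        img_emb, cap_emb: (B, D) tensors; row i of each is a matched pair.
        """
        # Cosine similarity matrix: entry (i, j) compares image i with caption j.
        img_emb = F.normalize(img_emb, dim=1)
        cap_emb = F.normalize(cap_emb, dim=1)
        scores = img_emb @ cap_emb.t()                       # (B, B)
        pos = scores.diag().unsqueeze(1)                     # matched-pair scores

        # Hinge terms: push mismatched pairs below matched ones by `margin`.
        cost_cap = (margin + scores - pos).clamp(min=0)      # image -> wrong caption
        cost_img = (margin + scores - pos.t()).clamp(min=0)  # caption -> wrong image

        # Zero out the diagonal (the matched pairs themselves).
        mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
        cost_cap = cost_cap.masked_fill(mask, 0)
        cost_img = cost_img.masked_fill(mask, 0)
        return cost_cap.sum() + cost_img.sum()

With in-batch negatives, each image serves as a negative example for every non-matching caption and vice versa, which is what lets this kind of alignment be learned from image-caption pairs alone.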
URL
https://arxiv.org/abs/1904.05521