Paper Reading AI Learner

Stacked Cross Attention for Image-Text Matching

2018-07-23 04:41:57
Kuang-Huei Lee, Xi Chen, Gang Hua, Houdong Hu, Xiaodong He

Abstract

In this paper, we study the problem of image-text matching. Inferring the latent semantic alignment between objects or other salient stuff (e.g. snow, sky, lawn) and the corresponding words in sentences allows one to capture fine-grained interplay between vision and language, and makes image-text matching more interpretable. Prior work either simply aggregates the similarity of all possible pairs of regions and words without attending differentially to more and less important words or regions, or uses a multi-step attentional process to capture a limited number of semantic alignments, which is less interpretable. In this paper, we present Stacked Cross Attention to discover the full latent alignments, using both image regions and words in a sentence as context, and infer image-text similarity. Our approach achieves state-of-the-art results on the MS-COCO and Flickr30K datasets. On Flickr30K, our approach outperforms the current best methods by 22.1% relatively in text retrieval from an image query, and by 18.2% relatively in image retrieval from a text query (based on Recall@1). On MS-COCO, our approach improves sentence retrieval by 17.8% relatively and image retrieval by 16.6% relatively (based on Recall@1 using the 5K test set). Code has been made available at: https://github.com/kuanghuei/SCAN.
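To make the idea concrete, here is a minimal NumPy sketch of the image-to-text attention direction described in the abstract: each image region attends over the words of a sentence, and the per-region relevance scores are pooled into a single image-sentence similarity. The function name, the `smooth` temperature value, and the choice of average pooling are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize vectors to unit L2 norm along `axis`."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def stacked_cross_attention_i2t(regions, words, smooth=9.0):
    """Image-to-text attention direction (illustrative sketch).

    regions: (n_regions, d) image region features
    words:   (n_words, d)   word features
    smooth:  inverse softmax temperature (assumed hyperparameter)
    """
    regions = l2_normalize(regions)
    words = l2_normalize(words)

    # Cosine similarity between every region i and word j.
    sim = regions @ words.T                        # (n_regions, n_words)
    # Threshold at zero and normalize over the region axis.
    sim = l2_normalize(np.clip(sim, 0.0, None), axis=0)

    # Softmax over words for each region, with temperature `smooth`.
    attn = np.exp(smooth * sim)
    attn /= attn.sum(axis=1, keepdims=True)

    # Attended sentence vector per region, then region relevance
    # as the cosine between the region and its attended vector.
    attended = attn @ words                        # (n_regions, d)
    relevance = np.sum(l2_normalize(attended) * regions, axis=1)

    # Average pooling over regions (LogSumExp pooling is an alternative).
    return float(relevance.mean())
```

The text-to-image direction is symmetric: swap the roles of regions and words so that each word attends over all regions instead.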

URL

https://arxiv.org/abs/1803.08024

PDF

https://arxiv.org/pdf/1803.08024.pdf

