Abstract
We propose an architecture for VQA that uses recurrent layers to generate visual and textual attention. The memory carried by the proposed recurrent attention units yields a rich joint embedding of visual and textual features and enables the model to reason about relations between several parts of the image and the question. Our single model outperforms the first-place winner on the VQA 1.0 dataset and performs within a small margin of the current state-of-the-art ensemble model. We also experiment with replacing the attention mechanisms in other state-of-the-art models with our implementation and show improved accuracy. In both cases, our recurrent attention mechanism improves performance on VQA tasks requiring sequential or relational reasoning.
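The abstract describes recurrent units whose hidden state accumulates information across attention hops over image regions. The paper's exact architecture is not given here, so the following is only a minimal generic sketch of that idea in numpy: all dimensions, weight matrices, and the single-question/region-grid setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes: 196 image regions (a 14x14 CNN grid), 512-d region
# features, a 256-d recurrent state. Weights are random placeholders.
R, D, H = 196, 512, 256
regions = rng.standard_normal((R, D))        # visual features per region
question = rng.standard_normal(D)            # pooled question embedding
W_x = rng.standard_normal((H, D)) * 0.01     # input projection
W_h = rng.standard_normal((H, H)) * 0.01     # recurrent weights
w_a = rng.standard_normal(H) * 0.01          # attention scoring vector

def recurrent_attention(regions, question, steps=2):
    """Each step scores regions against the recurrent state, attends,
    and folds the attended context back into the state, so later hops
    can condition on what was attended earlier."""
    h = np.tanh(W_x @ question)              # state initialised from question
    for _ in range(steps):
        scores = np.tanh(regions @ W_x.T + h @ W_h.T) @ w_a  # (R,)
        alpha = softmax(scores)              # attention over regions
        context = alpha @ regions            # (D,) attended visual summary
        h = np.tanh(W_h @ h + W_x @ context) # memory carries prior hops
    return h, alpha

h, alpha = recurrent_attention(regions, question)
```

The returned state `h` would feed an answer classifier; the key property the abstract highlights is that `h` persists across hops, letting a second attention pass depend on the first.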
URL
https://arxiv.org/abs/1802.00209