Abstract
In image-grounded text generation, fine-grained representations of the image are considered to be of paramount importance. Most current systems incorporate visual features and textual concepts as a sketch of an image. However, representations inferred in isolation are usually undesirable, because they consist of separate components whose relations remain elusive. In this work, we aim to represent an image with a set of integrated visual regions and the corresponding textual concepts. To this end, we build the Mutual Iterative Attention (MIA) module, which integrates correlated visual features and textual concepts by aligning the two modalities. We evaluate the proposed approach on the COCO dataset for image captioning. Extensive experiments show that the refined image representations improve the baseline models by up to 12% in terms of CIDEr, demonstrating that our method is effective and generalizes well to a wide range of models.
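As a rough illustration of the idea described above, a mutual iterative attention step can be sketched as two cross-attention blocks applied alternately for a fixed number of rounds, so each modality is refined by attending to the other. This is a minimal sketch under our own assumptions, not the authors' implementation; the class name, dimensions, iteration count, and residual/normalization choices below are all illustrative.

```python
# Hypothetical sketch of mutual iterative attention between visual regions
# and textual concepts (not the paper's exact architecture).
import torch
import torch.nn as nn

class MutualIterativeAttention(nn.Module):
    def __init__(self, dim=512, heads=8, iterations=2):
        super().__init__()
        self.iterations = iterations
        # one cross-attention block per direction; layer norms stabilize iterates
        self.vis_from_txt = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.txt_from_vis = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_t = nn.LayerNorm(dim)

    def forward(self, visual, textual):
        # visual:  (batch, num_regions,  dim) region features
        # textual: (batch, num_concepts, dim) concept embeddings
        for _ in range(self.iterations):
            # refine regions by attending to concepts, and vice versa
            v, _ = self.vis_from_txt(visual, textual, textual)
            t, _ = self.txt_from_vis(textual, visual, visual)
            visual = self.norm_v(visual + v)
            textual = self.norm_t(textual + t)
        return visual, textual

# usage: the refined features would replace the raw ones in a captioning model
regions = torch.randn(4, 36, 512)    # e.g. 36 detected regions per image
concepts = torch.randn(4, 10, 512)   # e.g. 10 predicted concept words
mia = MutualIterativeAttention()
refined_v, refined_t = mia(regions, concepts)
```

The key design point is that the two attention directions share the same loop, so after a few rounds the region features and concept embeddings converge toward a mutually aligned representation rather than remaining separate components.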
URL
https://arxiv.org/abs/1905.06139