Abstract
Performing high-level cognitive tasks requires the integration of feature maps with drastically different structure. In Visual Question Answering (VQA), image descriptors have spatial structure, while lexical inputs inherently follow a temporal sequence. The recently proposed Multimodal Compact Bilinear pooling (MCB) forms the outer product, via count-sketch approximation, of the visual and textual representations at each spatial location. While this procedure preserves spatial information locally, the outer products are taken independently for each fiber of the activation tensor and therefore do not incorporate spatial context. In this work, we introduce the multi-dimensional sketch (MD-sketch), a novel extension of count-sketch to tensors. Using this new formulation, we propose Multimodal Compact Tensor pooling (MCT) to fully exploit the global spatial context during bilinear pooling operations. In contrast to MCB, our approach preserves spatial context by directly convolving the MD-sketch of the visual feature tensor with the textual feature vector using a higher-order FFT. Furthermore, we apply MCT incrementally at each step of the question embedding and accumulate the multimodal vectors with a second LSTM layer before the final answer is chosen.
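To make the count-sketch machinery concrete, below is a minimal NumPy sketch of the classic compact bilinear pooling step that MCB builds on: each input vector is count-sketched (random bucket assignment with random signs), and the sketch of their outer product is obtained as a circular convolution of the two sketches, computed via FFT. Function names and the vector-only setting are illustrative assumptions, not code from the paper (the paper's MD-sketch generalizes this to full tensors).

```python
import numpy as np

def count_sketch(x, h, s, d):
    # Project x into d buckets: coordinate i lands in bucket h[i],
    # multiplied by the random sign s[i].
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def compact_bilinear(x, y, d, rng):
    # Count-sketch approximation of the outer product x (x) y:
    # sketch(x (x) y) = sketch(x) * sketch(y)  (circular convolution),
    # evaluated in O(d log d) via the FFT convolution theorem.
    hx = rng.integers(0, d, size=x.size)
    sx = rng.choice([-1.0, 1.0], size=x.size)
    hy = rng.integers(0, d, size=y.size)
    sy = rng.choice([-1.0, 1.0], size=y.size)
    fx = np.fft.fft(count_sketch(x, hx, sx, d))
    fy = np.fft.fft(count_sketch(y, hy, sy, d))
    return np.real(np.fft.ifft(fx * fy))
```

In MCB this operation is applied independently at every spatial location of the visual feature map; the MD-sketch proposed here instead sketches the whole visual tensor at once, so the higher-order FFT convolution mixes in global spatial context rather than a single fiber.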
URL
https://arxiv.org/abs/1706.06706