
Compact Tensor Pooling for Visual Question Answering

2017-06-20 23:55:32
Yang Shi, Tommaso Furlanello, Anima Anandkumar

Abstract

Performing high-level cognitive tasks requires the integration of feature maps with drastically different structure. In Visual Question Answering (VQA), image descriptors have spatial structure, while lexical inputs inherently follow a temporal sequence. The recently proposed Multimodal Compact Bilinear pooling (MCB) forms the outer products, via count-sketch approximation, of the visual and textual representations at each spatial location. While this procedure preserves spatial information locally, the outer products are taken independently for each fiber of the activation tensor and therefore do not capture spatial context. In this work, we introduce the multi-dimensional sketch (MD-sketch), a novel extension of count sketch to tensors. Using this new formulation, we propose Multimodal Compact Tensor Pooling (MCT) to fully exploit the global spatial context during bilinear pooling operations. In contrast to MCB, our approach preserves spatial context by directly convolving the MD-sketch of the visual feature tensor with the text feature vector using a higher-order FFT. Furthermore, we apply MCT incrementally at each step of the question embedding and accumulate the multi-modal vectors with a second LSTM layer before the final answer is chosen.
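The count-sketch machinery at the core of MCB is easy to summarize in a few lines of NumPy. The sketch below is illustrative, not the authors' code: it shows how the count sketch of an outer product reduces to a circular convolution of the two individual sketches, computed with the FFT. The names (`sketch_params`, `compact_bilinear`, dimension `d`) are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def sketch_params(n, d):
    """Random bucket hashes h: [n] -> [d] and signs s: [n] -> {-1, +1}."""
    return rng.integers(0, d, size=n), rng.choice([-1.0, 1.0], size=n)

def count_sketch(x, h, s, d):
    """Project x in R^n down to R^d: y[h[i]] += s[i] * x[i]."""
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

def compact_bilinear(v, q, d=1024):
    """MCB-style pooling of one visual fiber v and a question vector q.

    The count sketch of the outer product v q^T equals the circular
    convolution of the two individual sketches, so the full outer
    product never has to be formed explicitly."""
    hv, sv = sketch_params(v.size, d)
    hq, sq = sketch_params(q.size, d)
    fv = np.fft.rfft(count_sketch(v, hv, sv, d))
    fq = np.fft.rfft(count_sketch(q, hq, sq, d))
    return np.fft.irfft(fv * fq, n=d)

# In MCB this pooling runs independently at every spatial location,
# which is exactly the locality the paper's MCT aims to move past.
v = rng.standard_normal(2048)   # e.g. one fiber of a conv feature map
q = rng.standard_normal(2048)   # e.g. an LSTM question embedding
print(compact_bilinear(v, q).shape)  # (1024,)
```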

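The abstract does not spell out the MD-sketch construction, but one natural reading is a mode-wise count sketch: every mode of the visual tensor gets its own hash/sign pair, and the sketched question enters through an n-dimensional FFT convolution. The following is a hedged sketch under that assumption; the function names, output dimensions, and the delta-embedding of the question tensor are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)

def mode_params(shape, dims):
    """One (hash, sign) pair per tensor mode."""
    return [(rng.integers(0, d, size=n), rng.choice([-1.0, 1.0], size=n))
            for n, d in zip(shape, dims)]

def md_sketch(T, params, dims):
    """Multi-dimensional count sketch (assumed form): entry
    T[i1, ..., ik] lands in bucket (h1[i1], ..., hk[ik]) with the
    product of the per-mode signs."""
    Y = np.zeros(dims)
    it = np.nditer(T, flags=["multi_index"])
    for val in it:
        idx = tuple(h[i] for (h, _), i in zip(params, it.multi_index))
        sign = np.prod([s[i] for (_, s), i in zip(params, it.multi_index)])
        Y[idx] += sign * float(val)
    return Y

def mct_pool(visual, question, dims=(8, 8, 512)):
    """MCT-style pooling (our reading of the abstract): sketch the
    whole H x W x C visual tensor at once, place the sketched
    question at the spatial origin, and convolve with fftn so the
    bilinear product sees the globally sketched tensor rather than
    one fiber at a time."""
    Vs = md_sketch(visual, mode_params(visual.shape, dims), dims)
    hq = rng.integers(0, dims[-1], size=question.size)
    sq = rng.choice([-1.0, 1.0], size=question.size)
    qs = np.zeros(dims[-1])
    np.add.at(qs, hq, sq * question)
    Qs = np.zeros(dims)
    Qs[0, 0, :] = qs                      # delta in the spatial modes
    return np.fft.ifftn(np.fft.fftn(Vs) * np.fft.fftn(Qs)).real

visual = rng.standard_normal((14, 14, 256))   # e.g. conv feature map
question = rng.standard_normal(1024)
print(mct_pool(visual, question).shape)       # (8, 8, 512)
```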

URL

https://arxiv.org/abs/1706.06706

PDF

https://arxiv.org/pdf/1706.06706.pdf

