Abstract
We propose a generalized class of multimodal fusion operators for the task of visual question answering (VQA). We identify generalizations of existing multimodal fusion operators based on the Hadamard product, and show that specific non-trivial instantiations of this generalized fusion operator achieve superior OpenEnded accuracy on the VQA task. In particular, we introduce Nonlinearity Ensembling, Feature Gating, and post-fusion neural network layers as fusion operator components, culminating in an absolute improvement of $1.1$ percentage points on the VQA 2.0 test-dev set over baseline fusion operators that use the same input features. We take these findings as evidence that our generalized class of fusion operators, used as a search space in an architecture search over fusion operators, could lead to the discovery of even stronger task-specific operators.
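To make the fusion components named above concrete, here is a minimal illustrative sketch, not the paper's exact architecture: Hadamard-product fusion of projected image and question features, extended with a hypothetical sigmoid feature gate computed from the question. All dimensions, weight matrices, and the gating placement are assumptions for illustration, written in plain Python.

```python
import math
import random

def matvec(W, x):
    # dense matrix-vector product over plain Python lists
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

def hadamard_fusion(v, q, Wv, Wq):
    # project both modalities into a shared space, apply a tanh
    # non-linearity, then combine element-wise (Hadamard product)
    pv = [math.tanh(y) for y in matvec(Wv, v)]
    pq = [math.tanh(y) for y in matvec(Wq, q)]
    return [a * b for a, b in zip(pv, pq)]

def gated_fusion(v, q, Wv, Wq, Wg):
    # hypothetical feature gating: a sigmoid gate derived from the
    # question scales each fused feature in [0, 1]
    fused = hadamard_fusion(v, q, Wv, Wq)
    gate = [1.0 / (1.0 + math.exp(-y)) for y in matvec(Wg, q)]
    return [g * f for g, f in zip(gate, fused)]

random.seed(0)
d_v, d_q, d = 6, 5, 4  # toy feature/projection sizes (assumed)
rand_mat = lambda rows, cols: [[random.gauss(0, 1) for _ in range(cols)]
                               for _ in range(rows)]
v = [random.gauss(0, 1) for _ in range(d_v)]   # image feature vector
q = [random.gauss(0, 1) for _ in range(d_q)]   # question feature vector
Wv, Wq, Wg = rand_mat(d, d_v), rand_mat(d, d_q), rand_mat(d, d_q)
z = gated_fusion(v, q, Wv, Wq, Wg)
print(len(z))  # 4 fused features, each bounded in (-1, 1)
```

In the paper's framing, post-fusion neural network layers would then map a representation like `z` to answer logits; that stage is omitted here for brevity.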
URL
https://arxiv.org/abs/1803.09374