Abstract
Visual reasoning is a special visual question answering problem that is multi-step and compositional by nature, and that also requires intensive text-vision interactions. We propose CMM: Cascaded Mutual Modulation, a novel end-to-end visual reasoning model. CMM includes a multi-step comprehension process for both the question and the image. In each step, we use a Feature-wise Linear Modulation (FiLM) technique to enable the textual and visual pipelines to mutually control each other. Experiments show that CMM significantly outperforms most related models and reaches the state of the art on two visual reasoning benchmarks, CLEVR and NLVR, collected from synthetic and natural language respectively. Ablation studies confirm that both our multi-step framework and our visual-guided language modulation are critical to the task. Our code is available at https://github.com/FlamingHorizon/CMM-VR.
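The FiLM operation mentioned in the abstract applies a feature-wise affine transformation, where scaling and shifting parameters are predicted by a conditioning network (e.g. from the question text). A minimal sketch of the operation itself, with illustrative array shapes (the function name and toy values below are assumptions, not the authors' code):

```python
import numpy as np

def film_modulate(features, gamma, beta):
    """Feature-wise Linear Modulation (FiLM): scale and shift each
    feature map by per-channel conditioning parameters gamma and beta."""
    # features: (channels, height, width); gamma, beta: (channels,)
    return gamma[:, None, None] * features + beta[:, None, None]

# Toy example: two 2x2 feature maps modulated by text-derived parameters.
feats = np.ones((2, 2, 2))
gamma = np.array([2.0, 0.5])   # per-channel scaling from the conditioner
beta = np.array([1.0, -1.0])   # per-channel shifting from the conditioner
out = film_modulate(feats, gamma, beta)
# out[0] is all 3.0 (2*1+1); out[1] is all -0.5 (0.5*1-1)
```

In CMM, as described, this modulation runs in both directions: the text pipeline produces (gamma, beta) for visual features, and vice versa, cascaded over multiple steps.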
URL
https://arxiv.org/abs/1809.01943