Abstract
Visual Question Answering (VQA) has emerged as a highly active field in recent years, attracting increasing research effort aimed at improving VQA accuracy through advanced models such as Transformers. Despite this growing interest, there has been little comparative analysis of the textual modality in VQA, particularly regarding model complexity and its effect on performance. In this work, we conduct a comprehensive comparison, within a well-established VQA framework, between complex textual models that leverage long-range dependency mechanisms and simpler models that focus on local textual features. Our findings reveal that employing complex textual encoders is not invariably the optimal approach for the VQA-v2 dataset. Motivated by this insight, we introduce an improved model, ConvGRU, which incorporates convolutional layers to enhance the representation of question text. Evaluated on the VQA-v2 dataset, ConvGRU achieves better performance without substantially increasing parameter complexity.
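The abstract does not detail ConvGRU's internals; as a rough illustration only, a convolution-before-GRU question encoder of the kind described might look like the following PyTorch sketch. The class name ConvGRUQuestionEncoder and all dimensions, kernel sizes, and layer choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ConvGRUQuestionEncoder(nn.Module):
    """Hypothetical sketch: conv layers capture local n-gram features
    of the question before a GRU summarizes the sequence. All sizes
    are assumptions, not taken from the paper."""

    def __init__(self, vocab_size, embed_dim=300, conv_dim=512,
                 hidden_dim=1024, kernel_size=3):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # 1-D convolution over the token dimension extracts local
        # textual features (e.g., trigram patterns).
        self.conv = nn.Conv1d(embed_dim, conv_dim, kernel_size,
                              padding=kernel_size // 2)
        self.gru = nn.GRU(conv_dim, hidden_dim, batch_first=True)

    def forward(self, question_tokens):
        # question_tokens: (batch, seq_len) integer token ids
        x = self.embedding(question_tokens)   # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                 # (batch, embed_dim, seq_len)
        x = torch.relu(self.conv(x))          # (batch, conv_dim, seq_len)
        x = x.transpose(1, 2)                 # (batch, seq_len, conv_dim)
        _, h = self.gru(x)                    # h: (1, batch, hidden_dim)
        return h.squeeze(0)                   # (batch, hidden_dim) question feature

# Usage: encode a batch of two padded questions of length 8.
encoder = ConvGRUQuestionEncoder(vocab_size=10000)
tokens = torch.randint(1, 10000, (2, 8))
features = encoder(tokens)  # torch.Size([2, 1024])
```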
URL
https://arxiv.org/abs/2405.00479