Abstract
In this paper, we establish a benchmark for table visual question answering, referred to as TableVQA-Bench, derived from pre-existing table question-answering (QA) and table structure recognition datasets. Notably, these existing datasets lack images or QA pairs, two crucial components of TableVQA, so the primary objective of this paper is to obtain these missing components. Specifically, images are sourced either by applying a stylesheet or by employing the proposed table rendering system, and QA pairs are generated by exploiting a large language model (LLM) whose input is a text-formatted table. The completed TableVQA-Bench comprises 1,500 QA pairs. We comprehensively compare the performance of various multi-modal large language models (MLLMs) on TableVQA-Bench; in our experiments, GPT-4V achieves the highest accuracy among both commercial and open-sourced MLLMs. Moreover, we find that the number of vision queries plays a significant role in TableVQA performance. To further analyze the capabilities of MLLMs relative to their LLM backbones, we present image-formatted tables to the MLLMs and text-formatted tables to the LLMs. Our findings suggest that processing visual inputs is more challenging than processing text inputs, as evidenced by the lower performance of MLLMs despite their generally higher computational costs. The proposed TableVQA-Bench and evaluation code are available at this https URL.
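The QA-generation step described above, prompting an LLM with a text-formatted table, can be sketched as below. The prompt wording, the helper name `build_qa_generation_prompt`, and the sample table contents are all hypothetical illustrations, not the actual prompt or data used by TableVQA-Bench:

```python
def build_qa_generation_prompt(table_text: str, num_pairs: int = 3) -> str:
    """Assemble a prompt asking an LLM to write QA pairs for a table.

    The abstract does not specify the exact prompt used by TableVQA-Bench;
    this template is a minimal, assumed illustration of the idea.
    """
    return (
        "Below is a table in HTML format.\n\n"
        f"{table_text}\n\n"
        f"Generate {num_pairs} question-answer pairs that can be answered "
        "solely from the table. Write one 'Q: ... A: ...' pair per line."
    )

# Illustrative placeholder table (values are not real results).
table = (
    "<table><tr><th>Model</th><th>Score</th></tr>"
    "<tr><td>ModelA</td><td>1.0</td></tr></table>"
)
prompt = build_qa_generation_prompt(table, num_pairs=2)
```

The resulting `prompt` string would then be sent to an LLM API of choice; the model's response is parsed line by line into question-answer pairs.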
URL
https://arxiv.org/abs/2404.19205