Abstract
Visual Question Answering (VQA) is a complex task that requires the ability to process natural language and images simultaneously. Early research on this task focused on methods that help machines understand objects and scene context in images; however, text appearing in an image, which often carries explicit information about the image's content, was largely overlooked. Alongside the continuous development of AI, there have been many studies worldwide on the reading comprehension ability of VQA models. In Vietnam, a developing country where research conditions are still limited, this task remains open. We therefore introduce ViTextVQA (\textbf{Vi}etnamese \textbf{Text}-based \textbf{V}isual \textbf{Q}uestion \textbf{A}nswering dataset), the first large-scale Vietnamese dataset specializing in the ability to understand text appearing in images, which contains \textbf{over 16,000} images and \textbf{over 50,000} questions with answers. Through meticulous experiments with various state-of-the-art models, we uncover the significance of the order in which tokens in OCR text are processed and selected to formulate answers. This finding helped us significantly improve the performance of the baseline models on the ViTextVQA dataset. Our dataset is available at this \href{this https URL}{link} for research purposes.
URL
https://arxiv.org/abs/2404.10652