Abstract
Multimodal Large Language Models (MLLMs) have shown impressive results on various multimodal tasks. However, most existing MLLMs are not well suited for document-oriented tasks, which require fine-grained image perception and information compression. In this paper, we present TextHawk, an MLLM that is specifically designed for document-oriented tasks while preserving the general capabilities of MLLMs. TextHawk aims to achieve efficient fine-grained perception through four dedicated components. First, a ReSampling and ReArrangement (ReSA) module is proposed to reduce the redundancy in document texts and lower the computational cost of the MLLM. Second, we encode the position of each local feature with Scalable Positional Embeddings (SPEs), which preserve scalability across varying image sizes. Third, a Query Proposal Network (QPN) is adopted to initialize the queries dynamically across different sub-images. Finally, to further enhance the fine-grained visual perception of the MLLM, we design a Multi-Level Cross-Attention (MLCA) mechanism that captures the hierarchical structure and semantic relations of document images. Furthermore, we create a new instruction-tuning dataset for document-oriented tasks by enriching the multimodal document data with Gemini Pro. We conduct extensive experiments on both general and document-oriented MLLM benchmarks, and show that TextHawk outperforms state-of-the-art methods, demonstrating its effectiveness and superiority in fine-grained document perception as well as its general abilities.
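The rearrangement half of ReSA can be illustrated with a minimal sketch. This is a hypothetical helper, not the paper's implementation: the idea is that adjacent visual tokens are concatenated along the feature dimension, so the token sequence handed to the language model gets shorter without discarding information.

```python
def rearrange_tokens(tokens, group=4):
    """Concatenate every `group` adjacent token vectors along the feature
    dimension: the sequence becomes `group` times shorter while each token
    becomes `group` times wider, so no information is dropped.

    `tokens` is a list of equal-length feature vectors (plain lists here;
    a real model would operate on tensors). Illustrative sketch only.
    """
    if len(tokens) % group != 0:
        raise ValueError("token count must be divisible by the group size")
    return [
        [x for tok in tokens[i:i + group] for x in tok]
        for i in range(0, len(tokens), group)
    ]

# 8 two-dimensional tokens -> 2 eight-dimensional tokens
compressed = rearrange_tokens([[i, i + 0.5] for i in range(8)], group=4)
```

In the paper's setting the resampling step (cross-attention with learned queries) compresses the features first, and a rearrangement like the above further cuts the sequence length, which is what keeps dense document images affordable for the LLM.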
URL
https://arxiv.org/abs/2404.09204