Paper Reading AI Learner

TextHawk: Exploring Efficient Fine-Grained Perception of Multimodal Large Language Models

2024-04-14 09:48:37
Ya-Qi Yu, Minghui Liao, Jihao Wu, Yongxin Liao, Xiaoyu Zheng, Wei Zeng

Abstract

Multimodal Large Language Models (MLLMs) have shown impressive results on various multimodal tasks. However, most existing MLLMs are not well suited to document-oriented tasks, which require fine-grained image perception and information compression. In this paper, we present TextHawk, an MLLM specifically designed for document-oriented tasks that preserves the general capabilities of MLLMs. TextHawk aims to achieve efficient fine-grained perception through four dedicated components. First, a ReSampling and ReArrangement (ReSA) module is proposed to reduce the redundancy in document texts and lower the computational cost of the MLLM. Second, we encode the position of each local feature with Scalable Positional Embeddings (SPEs), which preserve scalability across varying image sizes. A Query Proposal Network (QPN) is then adopted to initialize the queries dynamically among different sub-images. To further enhance the fine-grained visual perception of the MLLM, we design a Multi-Level Cross-Attention (MLCA) mechanism that captures the hierarchical structure and semantic relations of document images. Furthermore, we create a new instruction-tuning dataset for document-oriented tasks by enriching the multimodal document data with Gemini Pro. We conduct extensive experiments on both general and document-oriented MLLM benchmarks, and show that TextHawk outperforms state-of-the-art methods, demonstrating its effectiveness and superiority in fine-grained document perception and general abilities.
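The ReSA module described above compresses the visual token stream before it reaches the language model. The paper's exact design is not given here, but the general idea of compressing many visual tokens into a few via learnable-query cross-attention pooling can be sketched as follows (a minimal, single-head illustration in plain Python; all names and shapes are hypothetical):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def resample_tokens(visual_feats, queries):
    """Compress len(visual_feats) visual tokens down to len(queries) tokens.

    Each learnable query attends over all visual tokens (scaled dot-product
    attention) and pools them into a single output token. Illustrative only,
    not TextHawk's actual ReSA implementation.
    """
    d = len(queries[0])
    out = []
    for q in queries:
        # Attention scores between this query and every visual token.
        scores = [sum(qi * vi for qi, vi in zip(q, v)) / math.sqrt(d)
                  for v in visual_feats]
        weights = softmax(scores)
        # Weighted sum of visual tokens -> one compressed token.
        pooled = [sum(w * v[j] for w, v in zip(weights, visual_feats))
                  for j in range(d)]
        out.append(pooled)
    return out

# 256 visual tokens (e.g. a 16x16 patch grid) compressed to 4 query tokens.
feats = [[((i * 31 + j * 7) % 13) / 13.0 for j in range(8)] for i in range(256)]
queries = [[((i * 17 + j * 5) % 11) / 11.0 for j in range(8)] for i in range(4)]
out = resample_tokens(feats, queries)
print(len(out), len(out[0]))  # 4 8
```

The compression ratio (here 256 → 4) is what lowers the downstream LLM cost: the language model sees only the pooled tokens, not the full patch grid.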

URL

https://arxiv.org/abs/2404.09204

PDF

https://arxiv.org/pdf/2404.09204.pdf

