Abstract
Despite the impressive capabilities of Multimodal Large Language Models (MLLMs) in integrating text and image modalities, challenges remain in accurately interpreting detailed visual elements. This paper presents an empirical study on enhancing MLLMs with state-of-the-art (SOTA) object detection and Optical Character Recognition (OCR) models to improve fine-grained image understanding and reduce hallucination in responses. Our research investigates the embedding-based infusion of detection information, the impact of such infusion on the MLLMs' original abilities, and the interchangeability of detection models. We conduct systematic experiments with models such as LLaVA-1.5, DINO, and PaddleOCRv2, revealing that our approach not only refines MLLMs' performance in specific visual tasks but also maintains their original strengths. The resulting enhanced MLLMs outperform SOTA models on 9 out of 10 benchmarks, achieving an improvement of up to 12.99% on the normalized average score, marking a notable advancement in multimodal understanding. We release our code to facilitate further exploration into the fine-grained multimodal dialogue capabilities of MLLMs.
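The central mechanism here is embedding-based infusion: serializing detector and OCR outputs and mapping them into the language model's token-embedding space so they can be consumed alongside the visual tokens. The PyTorch sketch below illustrates one plausible form of this, projecting each detected object (class label plus normalized box coordinates) into an LLM-sized embedding. The module name, dimensions, and label set are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch (assumed design, not the paper's code) of embedding-based
# infusion: each detection becomes one extra token in the LLM embedding space.
import torch
import torch.nn as nn

class DetectionInfusion(nn.Module):
    """Projects detector outputs (class id + normalized box) into the LLM's
    token-embedding space so they can be concatenated with vision tokens."""

    def __init__(self, num_classes: int, llm_dim: int = 4096):
        super().__init__()
        self.class_embed = nn.Embedding(num_classes, llm_dim)
        self.box_proj = nn.Linear(4, llm_dim)  # (x1, y1, x2, y2) in [0, 1]

    def forward(self, class_ids: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # class_ids: (num_objects,), boxes: (num_objects, 4)
        # One token per detected object: class embedding + projected box.
        return self.class_embed(class_ids) + self.box_proj(boxes)

# Usage: infuse DINO-style detections as extra prefix tokens (values hypothetical).
infuser = DetectionInfusion(num_classes=91)          # e.g., a COCO-sized label set
class_ids = torch.tensor([17, 0])                    # two detected objects
boxes = torch.tensor([[0.1, 0.2, 0.5, 0.9],
                      [0.6, 0.1, 0.9, 0.4]])
det_tokens = infuser(class_ids, boxes)               # shape: (2, 4096)
# det_tokens would then be concatenated with the vision tokens and text
# embeddings along the sequence dimension before the LLM forward pass.
```

OCR results could be infused analogously, with recognized text embedded via the LLM's existing token embeddings and paired with a projected box for its location.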
URL
https://arxiv.org/abs/2401.17981