Abstract
We introduce Lumos, the first end-to-end multimodal question-answering system with text-understanding capabilities. At the core of Lumos is a Scene Text Recognition (STR) component that extracts text from first-person point-of-view images; its output is used to augment the input to a Multimodal Large Language Model (MM-LLM). While building Lumos, we encountered numerous challenges related to STR quality, overall latency, and model inference. In this paper, we delve into those challenges and discuss the system architecture, design choices, and modeling techniques employed to overcome them. We also provide a comprehensive evaluation of each component, demonstrating high quality and efficiency.
URL
https://arxiv.org/abs/2402.08017