Abstract
Scene text recognition has attracted increasing attention due to its diverse applications. Most state-of-the-art methods adopt an encoder-decoder framework with an attention mechanism, autoregressively generating text from left to right. Despite convincing performance, this sequential decoding strategy constrains inference speed. Conversely, non-autoregressive models provide faster, simultaneous predictions but often sacrifice accuracy. Although an explicit language model can improve performance, it adds computational overhead. Moreover, separating linguistic knowledge from visual information may harm the final prediction. In this paper, we propose an alternative solution: a parallel and iterative decoder that adopts an easy-first decoding strategy. Furthermore, we regard text recognition as an image-based conditional text generation task and utilize a discrete diffusion strategy, ensuring exhaustive exploration of bidirectional contextual information. Extensive experiments demonstrate that the proposed approach achieves superior results on benchmark datasets, including both Chinese and English text images.
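The easy-first decoding strategy mentioned in the abstract can be illustrated with a minimal sketch: all character positions start masked, each iteration predicts every masked position in parallel, and only the most confident predictions are committed, so "easy" positions are decoded before "hard" ones. Note this is an assumption-laden toy (the `predict_probs` stub, the commit schedule, and all names are hypothetical stand-ins, not the paper's actual implementation; a real recognizer would condition on the image and the partially decoded text).

```python
# Toy sketch of easy-first parallel iterative decoding (NOT the paper's code).
# Each pass predicts all masked positions in parallel and commits only the
# most confident ones; the rest remain masked for the next iteration.
import numpy as np

MASK = -1  # placeholder id for undecided positions

def predict_probs(tokens, vocab_size, rng):
    # Hypothetical stand-in for the recognizer: a per-position distribution
    # over the character vocabulary. A real model would condition on the
    # input image and the tokens decoded so far.
    logits = rng.standard_normal((len(tokens), vocab_size))
    exp = np.exp(logits - logits.max(axis=1, keepdims=True))
    return exp / exp.sum(axis=1, keepdims=True)

def easy_first_decode(seq_len, vocab_size, n_iters=4, seed=0):
    rng = np.random.default_rng(seed)
    tokens = np.full(seq_len, MASK)
    for it in range(n_iters):
        masked = np.flatnonzero(tokens == MASK)
        if masked.size == 0:
            break
        probs = predict_probs(tokens, vocab_size, rng)
        best = probs[masked].argmax(axis=1)          # top prediction per masked slot
        conf = probs[masked, best]                   # its confidence
        # Commit a fraction of the remaining slots, most confident first.
        k = max(1, int(np.ceil(masked.size / (n_iters - it))))
        commit = masked[np.argsort(-conf)[:k]]
        tokens[commit] = probs[commit].argmax(axis=1)
    return tokens

decoded = easy_first_decode(seq_len=8, vocab_size=26)
```

With `n_iters=4` and `seq_len=8`, each pass commits about two positions, so the whole sequence is filled in four parallel passes instead of eight sequential steps, which is the speed advantage the abstract contrasts with left-to-right decoding.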
Abstract (translated)
Scene text recognition has attracted increasing attention due to its diverse application scenarios. State-of-the-art methods adopt an encoder-decoder framework with an attention mechanism, generating text from left to right. Despite convincing performance, this sequential decoding strategy limits inference speed. Conversely, non-autoregressive models offer faster, simultaneous predictions but usually sacrifice accuracy. Although an explicit language model can improve performance, it increases the computational burden. Moreover, separating linguistic knowledge from visual information may harm the final prediction. In this paper, we propose an alternative solution: a parallel and iterative decoder that adopts an easy-first decoding strategy. Furthermore, we regard text recognition as an image-based conditional text generation task and employ a discrete diffusion strategy to ensure thorough exploration of bidirectional contextual information. Extensive experiments demonstrate that the proposed approach achieves superior results on benchmark datasets covering both Chinese and English text images.
URL
https://arxiv.org/abs/2312.11923