Abstract
End-to-end scene text spotting, which aims to read the text in natural images, has garnered significant attention in recent years. However, recent state-of-the-art methods usually incorporate detection and recognition simply by sharing a backbone, which does not directly exploit the feature interaction between the two tasks. In this paper, we propose a new end-to-end scene text spotting framework, termed SwinTextSpotter v2, which seeks a better synergy between text detection and recognition. Specifically, we enhance the relationship between the two tasks using novel Recognition Conversion and Recognition Alignment modules. Recognition Conversion explicitly guides text localization through the recognition loss, while Recognition Alignment dynamically extracts text features for recognition based on the detection predictions. This simple yet effective design yields a concise framework that requires neither an additional rectification module nor character-level annotations for arbitrarily shaped text. Furthermore, introducing a Box Selection Schedule greatly reduces the detector's parameters without degrading performance. Qualitative and quantitative experiments demonstrate that SwinTextSpotter v2 achieves state-of-the-art performance on various multilingual (English, Chinese, and Vietnamese) benchmarks. The code will be available at \href{this https URL}{SwinTextSpotter v2}.
Abstract (translated)
End-to-end scene text spotting in natural images has attracted wide attention in recent years. However, recent state-of-the-art methods typically combine detection and recognition only by sharing a backbone, without directly exploiting the feature interaction between the two tasks. In this paper, we propose a new end-to-end scene text spotting framework, SwinTextSpotter v2, which seeks a better synergy between text detection and recognition. Specifically, we strengthen the relationship between the two tasks by introducing new Recognition Conversion and Recognition Alignment modules. Recognition Conversion explicitly guides text localization through the recognition loss, while Recognition Alignment dynamically extracts text features for recognition based on the detection predictions. This simple yet effective design removes the need for an additional rectification module or character-level annotations when handling arbitrarily shaped text. In addition, by introducing a Box Selection Schedule, the detector's parameters are greatly reduced without any loss in performance. Qualitative and quantitative experiments show that SwinTextSpotter v2 achieves state-of-the-art results on various multilingual (English, Chinese, and Vietnamese) benchmarks. The code will be made publicly available at \href{this https URL}{SwinTextSpotter v2}.
URL
https://arxiv.org/abs/2401.07641