Abstract
Many new proposals for scene text recognition (STR) models have been introduced in recent years. While each claims to have pushed the boundary of the technology, a holistic and fair comparison has been largely missing in the field due to inconsistent choices of training and evaluation datasets. This paper addresses this difficulty with three major contributions. First, we examine the inconsistencies of training and evaluation datasets, and the performance gaps that result from these inconsistencies. Second, we introduce a unified four-stage STR framework that most existing STR models fit into. Using this framework allows for the extensive evaluation of previously proposed STR modules and the discovery of previously unexplored module combinations. Third, we analyze the module-wise contributions to performance in terms of accuracy, speed, and memory demand, under one consistent set of training and evaluation datasets. Such analyses remove the obstacles that currently hinder fair comparison, clarifying the performance gains contributed by existing modules.
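The framework's key idea is that an STR model decomposes into four sequential stages, each with interchangeable module choices, so candidate architectures form a combinatorial search space. The following minimal sketch illustrates this; the stage names and module options listed here are assumptions for illustration and may not match the paper's exact evaluated set.

```python
from itertools import product

# Hypothetical module options per stage (assumed for illustration; the
# paper's actual evaluated choices may differ).
STAGES = {
    "transformation": ["None", "TPS"],
    "feature_extraction": ["VGG", "RCNN", "ResNet"],
    "sequence_modeling": ["None", "BiLSTM"],
    "prediction": ["CTC", "Attn"],
}

def enumerate_combinations(stages):
    """Yield every module combination as a dict mapping stage -> module."""
    names = list(stages)
    for choice in product(*(stages[s] for s in names)):
        yield dict(zip(names, choice))

combos = list(enumerate_combinations(STAGES))
print(len(combos))  # 2 * 3 * 2 * 2 = 24 combinations
```

Evaluating each combination on one fixed set of training and evaluation datasets is what makes the module-wise accuracy/speed/memory comparison fair.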
URL
https://arxiv.org/abs/1904.01906