Abstract
Contrary to popular belief, Optical Character Recognition (OCR) remains a challenging problem when text occurs in unconstrained environments such as natural scenes, owing to geometric distortions, complex backgrounds, and diverse fonts. In this paper, we present a segmentation-free OCR system that combines deep learning methods, synthetic training data generation, and data augmentation techniques. We render synthetic training data using large text corpora and over 2000 fonts. To simulate text occurring in complex natural scenes, we augment the extracted samples with geometric distortions and with a proposed data augmentation technique: alpha-compositing with background textures. Our models employ a convolutional neural network encoder to extract features from text images. Inspired by recent progress in neural machine translation and language modeling, we examine the capabilities of both recurrent and convolutional neural networks in modeling the interactions between input elements.
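The augmentation named above, alpha-compositing a rendered text sample with a background texture, can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's exact procedure; the function name and the range from which the blending weight is drawn are assumptions.

```python
import numpy as np


def composite_with_texture(text_img, texture, rng=None):
    """Alpha-composite a rendered text image over a background texture.

    text_img, texture: float arrays in [0, 1] with identical shapes.
    The blending weight alpha is drawn at random so each synthetic
    sample receives a different degree of background clutter (the
    sampling range here is an assumption, not taken from the paper).
    """
    if rng is None:
        rng = np.random.default_rng()
    alpha = rng.uniform(0.5, 1.0)  # keep the text layer dominant
    return alpha * text_img + (1.0 - alpha) * texture
```

Blending rather than pasting text onto a flat background lets the texture show through the glyphs, which better approximates the low-contrast, cluttered appearance of scene text.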
URL
https://arxiv.org/abs/1906.01969