Abstract
Scene text recognition has recently been widely treated as a sequence-to-sequence prediction problem, in which the traditional fully-connected LSTM (FC-LSTM) has played a critical role. Due to the limitations of FC-LSTM, existing methods have to convert 2-D feature maps into 1-D sequential feature vectors, severely damaging the valuable spatial and structural information of text images. In this paper, we argue that scene text recognition is essentially a spatiotemporal prediction problem for its 2-D image inputs, and propose a convolutional LSTM (ConvLSTM)-based scene text recognizer, namely FACLSTM, i.e., Focused Attention ConvLSTM, where the spatial correlation of pixels is fully leveraged when performing sequential prediction with LSTM. In particular, the attention mechanism is properly incorporated into an efficient ConvLSTM structure via convolutional operations, and additional character center masks are generated to help focus attention on the right feature areas. Experimental results on the benchmark datasets IIIT5K, SVT and CUTE demonstrate that our proposed FACLSTM performs competitively on regular, low-resolution and noisy text images, and outperforms state-of-the-art approaches on curved text by large margins.
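The core idea the abstract contrasts with FC-LSTM is that a ConvLSTM computes all gates with convolutions over 2-D feature maps, so the hidden and cell states keep their spatial layout instead of being flattened into 1-D vectors. The following is a minimal NumPy sketch of one generic ConvLSTM step (not the paper's exact FACLSTM architecture; all function and variable names here are illustrative):

```python
import numpy as np

def conv2d(x, w):
    """'Same'-padded 2-D convolution. x: (C_in, H, W), w: (C_out, C_in, k, k)."""
    c_out, c_in, k, _ = w.shape
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))
    H, W = x.shape[1:]
    out = np.zeros((c_out, H, W))
    for o in range(c_out):
        for i in range(c_in):
            for di in range(k):
                for dj in range(k):
                    out[o] += w[o, i, di, dj] * xp[i, di:di + H, dj:dj + W]
    return out

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm_step(x, h, c, Wx, Wh):
    """One ConvLSTM time step: gates are convolutions, states stay 2-D maps.

    x: input feature map (C_in, H, W); h, c: hidden/cell states (C_hid, H, W);
    Wx: (4*C_hid, C_in, k, k); Wh: (4*C_hid, C_hid, k, k).
    Biases are omitted for brevity.
    """
    gates = conv2d(x, Wx) + conv2d(h, Wh)
    hid = h.shape[0]
    i = sigmoid(gates[:hid])           # input gate
    f = sigmoid(gates[hid:2 * hid])    # forget gate
    o = sigmoid(gates[2 * hid:3 * hid])  # output gate
    g = np.tanh(gates[3 * hid:])       # candidate cell content
    c_new = f * c + i * g              # elementwise, per spatial location
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```

Note that `h_new` and `c_new` have the same `(C_hid, H, W)` shape as the inputs, which is exactly the property that lets spatial attention masks (such as the paper's character center masks) be applied directly to the recurrent state.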
URL
https://arxiv.org/abs/1904.09405