Abstract
Inspired by recent successes in neural machine translation and image caption generation, we present an attention-based encoder-decoder model (AED) to recognize Vietnamese handwritten text. The model consists of two parts: a DenseNet for extracting invariant features, and a Long Short-Term Memory network (LSTM) with an incorporated attention model for generating output text (the LSTM decoder); the attention model connects the CNN part to the decoder. The input of the CNN part is a handwritten text image, and the target of the LSTM decoder is the corresponding text of the input image. Since all the parts are differentiable components, our model is trained end-to-end to predict the text from a given input image. In the experiment section, we evaluate our proposed AED model on the VNOnDB-Word and VNOnDB-Line datasets to verify its efficiency. The experimental results show that our model achieves a word error rate of 12.30% without using any language model. This result is competitive with the handwriting recognition system provided by Google in the Vietnamese Online Handwritten Text Recognition competition.
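The attention step that links the CNN features to the LSTM decoder can be sketched as below. This is a minimal NumPy illustration of additive (Bahdanau-style) attention, a common choice for AED models; all dimensions, weight matrices, and the specific scoring function are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): T encoder timesteps,
# D encoder feature size, S decoder state size, A attention hidden size.
T, D, S, A = 20, 64, 128, 32

# Encoder output: feature columns a DenseNet-style CNN might emit for
# a text-line image (random placeholders here).
H = rng.standard_normal((T, D))
s = rng.standard_normal(S)          # current LSTM decoder hidden state

# Additive-attention parameters -- illustrative random values.
W_h = rng.standard_normal((D, A)) * 0.1
W_s = rng.standard_normal((S, A)) * 0.1
v = rng.standard_normal(A) * 0.1

def attention_context(H, s):
    """Score each encoder column against the decoder state, normalize
    with a softmax, and return attention weights plus context vector."""
    scores = np.tanh(H @ W_h + s @ W_s) @ v         # shape (T,)
    scores -= scores.max()                          # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # attention weights
    context = alpha @ H                             # (D,) weighted sum
    return alpha, context

alpha, context = attention_context(H, s)
```

At each decoding step, the context vector is concatenated with the previous output embedding and fed to the LSTM, letting the decoder focus on different image regions per character; this is the standard AED decoding loop, not a claim about this paper's exact wiring.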
URL
https://arxiv.org/abs/1905.05381