Abstract
Recent advances in handwritten text recognition have enabled the recognition of whole documents in an end-to-end fashion: the Document Attention Network (DAN) recognizes characters one after the other through an attention-based prediction process until it reaches the end of the document. However, this autoregressive process yields an inference stage that cannot benefit from any parallelization optimization. In this paper, we propose Faster DAN, a two-step strategy to speed up recognition at prediction time: the model first predicts the first character of each text line in the document, and then completes all the text lines in parallel through multi-target queries and a specific document positional encoding scheme. Faster DAN reaches results competitive with the standard DAN while being at least 4 times faster on whole single-page and double-page images of the RIMES 2009, READ 2016, and MAURDOR datasets. Source code and trained model weights are available at this https URL.
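The two-step decoding strategy described above can be illustrated with a toy simulation. This is only a minimal sketch of the control flow, not the actual model: `predict_next` is a hypothetical stand-in for the real attention-based decoder, and the first-character pass is collapsed to slicing the targets. The key point it demonstrates is that the second pass takes a number of steps proportional to the longest line, not to the total number of characters in the document.

```python
END = "<eol>"  # hypothetical end-of-line token

def predict_next(prefix, target):
    # Toy stand-in for the decoder: returns the next character of
    # `target` given the current `prefix`, or END when the line is done.
    return target[len(prefix)] if len(prefix) < len(target) else END

def faster_dan_decode(targets):
    # Pass 1: the first character of every text line
    # (predicted autoregressively by the real model).
    lines = [t[:1] for t in targets]
    active = [True] * len(targets)
    steps = 0
    # Pass 2: complete all lines in parallel. Each iteration of the
    # while-loop stands for one batched multi-target query; the inner
    # Python loop merely simulates what the model does in one forward pass.
    while any(active):
        steps += 1
        for i, line in enumerate(lines):
            if not active[i]:
                continue
            nxt = predict_next(line, targets[i])
            if nxt == END:
                active[i] = False
            else:
                lines[i] = line + nxt
    return lines, steps
```

For two lines of lengths 5 and 6, sequential decoding would need 11 character predictions, whereas this parallel completion finishes in 6 steps, one per character of the longest line plus the end-of-line detection.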
URL
https://arxiv.org/abs/2301.10593