Abstract
As generative text models produce increasingly long answers, we tackle the problem of synthesizing long text in digital ink. We show that the commonly used models for this task fail to generalize to long-form data, and that this problem can be solved by augmenting the training data, changing the model architecture, and modifying the inference procedure. These methods use a contrastive learning technique and are tailored specifically to the handwriting domain. They can be applied to any encoder-decoder model that works with digital ink. We demonstrate that our method halves the character error rate on long-form English data compared to a baseline RNN, and reduces it by 16% compared to the previous approach aimed at the same problem. We show that all three parts of the method improve the recognizability of the generated inks. In addition, we evaluate the synthesized data in a human study and find that people perceive most of the generated data as real.
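The quantitative claims above rest on the character error rate (CER). The abstract does not define it, but the standard definition is the Levenshtein edit distance between the recognized text and the reference, divided by the reference length. A minimal sketch (not code from the paper; the function names are illustrative):

```python
def edit_distance(ref: str, hyp: str) -> int:
    """Levenshtein distance via dynamic programming over one rolling row."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    """Character error rate: edits needed per reference character."""
    return edit_distance(ref, hyp) / max(len(ref), 1)
```

For example, `cer("handwriting", "handwrlting")` is 1/11, since a single substitution repairs the hypothesis. "Reducing CER by half" therefore means the recognizer makes half as many character-level mistakes on the synthesized ink.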
Abstract (translated)
As generative text models produce increasingly long answers, we tackle the problem of synthesizing long text in digital ink. We show that the commonly used models for this task fail to generalize to long-form data, and that this problem can be solved by augmenting the training data, changing the model architecture, and modifying the inference procedure. These methods use a contrastive learning technique and are tailored specifically to the handwriting domain. They can be applied to any encoder-decoder model that works with digital ink. We demonstrate that our method halves the character error rate on long-form English data compared to a baseline RNN, and reduces it by 16% compared to the previous approach aimed at the same problem. We show that all three parts of the method improve the recognizability of the generated inks. In addition, we evaluate the synthesized data in a human study and find that people perceive most of the generated data as real.
URL
https://arxiv.org/abs/2311.17786