Abstract
Digital note-taking is gaining popularity, offering a durable, editable, and easily indexable way of storing notes in vectorized form, known as digital ink. However, a substantial gap remains between this mode of note-taking and traditional pen-and-paper note-taking, a practice still favored by a vast majority. Our work, InkSight, aims to bridge this gap by empowering physical note-takers to effortlessly convert their work (offline handwriting) into digital ink (online handwriting), a process we refer to as derendering. Prior research on the topic has focused on the geometric properties of images, resulting in limited generalization beyond the training domains. Our approach combines reading and writing priors, allowing a model to be trained in the absence of large amounts of paired samples, which are difficult to obtain. To our knowledge, this is the first work that effectively derenders handwritten text in arbitrary photos with diverse visual characteristics and backgrounds. Furthermore, it generalizes beyond its training domain to simple sketches. Our human evaluation reveals that 87% of the samples produced by our model on the challenging HierText dataset are considered a valid tracing of the input image, and 67% look like pen trajectories traced by a human.
URL
https://arxiv.org/abs/2402.05804