Abstract
This paper introduces SAGHOG, a self-supervised pretraining strategy for writer retrieval that uses HOG features of the binarized input image. Our preprocessing applies the Segment Anything technique to extract handwriting from various datasets, yielding about 24k documents, after which a vision transformer is trained to reconstruct masked patches of the handwriting. SAGHOG is then finetuned by appending NetRVLAD as an encoding layer to the pretrained encoder. Evaluation of our approach on three historical datasets, Historical-WI, HisFrag20, and GRK-Papyri, demonstrates the effectiveness of SAGHOG for writer retrieval. Additionally, we provide ablation studies on our architecture and evaluate unsupervised and supervised finetuning. Notably, on HisFrag20, SAGHOG outperforms related work with a mAP of 57.2%, a margin of 11.6% over the current state of the art, showcasing its robustness on challenging data; it is also competitive on small datasets, e.g. GRK-Papyri, where we achieve a Top-1 accuracy of 58.0%.
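The pretraining objective described in the abstract, predicting HOG features of masked patches of a binarized image, can be illustrated with a minimal sketch. This is not the paper's implementation: the patch size, mask ratio, and 9-bin unsigned-orientation histogram are assumptions, and the vision transformer that regresses these targets is omitted; only the target computation is shown, in pure Python for clarity.

```python
import math
import random

def hog_descriptor(patch, bins=9):
    """L2-normalized orientation histogram of gradient magnitudes
    for one patch (a 2D list of pixel values)."""
    h, w = len(patch), len(patch[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = patch[y][x + 1] - patch[y][x - 1]   # horizontal gradient
            gy = patch[y + 1][x] - patch[y - 1][x]   # vertical gradient
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[int(ang / 180 * bins) % bins] += mag
    norm = math.sqrt(sum(v * v for v in hist)) or 1.0
    return [v / norm for v in hist]

def masked_hog_targets(image, patch_size=4, mask_ratio=0.75, seed=0):
    """Split a binarized image into non-overlapping patches, mask a random
    subset, and return (masked patch indices, HOG targets for those patches).
    A masked-image model would regress only these targets."""
    patches = []
    for py in range(0, len(image), patch_size):
        for px in range(0, len(image[0]), patch_size):
            patches.append([row[px:px + patch_size]
                            for row in image[py:py + patch_size]])
    rng = random.Random(seed)
    n_mask = int(len(patches) * mask_ratio)
    masked = sorted(rng.sample(range(len(patches)), n_mask))
    return masked, [hog_descriptor(patches[i]) for i in masked]

# Toy usage: a 16x16 binary "image" -> 16 patches, 12 of them masked.
image = [[(x + y) % 2 for x in range(16)] for y in range(16)]
masked, targets = masked_hog_targets(image)
```

In the actual pipeline the encoder sees only the visible patches and a decoder predicts the HOG descriptors of the masked ones; the sketch above covers just the target side of that loss.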
URL
https://arxiv.org/abs/2404.17221