
SAGHOG: Self-Supervised Autoencoder for Generating HOG Features for Writer Retrieval

2024-04-26 07:48:00
Marco Peer, Florian Kleber, Robert Sablatnig

Abstract

This paper introduces SAGHOG, a self-supervised pretraining strategy for writer retrieval that uses HOG features of the binarized input image. Our preprocessing applies Segment Anything to extract the handwriting from several datasets, yielding about 24k documents, after which a vision transformer is trained to reconstruct masked patches of the handwriting. SAGHOG is then finetuned by appending NetRVLAD as an encoding layer to the pretrained encoder. Evaluation of our approach on three historical datasets, Historical-WI, HisFrag20, and GRK-Papyri, demonstrates the effectiveness of SAGHOG for writer retrieval. Additionally, we provide ablation studies on our architecture and evaluate unsupervised and supervised finetuning. Notably, on HisFrag20, SAGHOG outperforms related work with a mAP of 57.2%, a margin of 11.6% over the current state of the art, showcasing its robustness on challenging data, and it remains competitive even on small datasets, e.g. GRK-Papyri, where we achieve a Top-1 accuracy of 58.0%.
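
Code sketch (illustrative)

Below is a minimal sketch of the pretraining objective described in the abstract: a ViT-style encoder receives a binarized handwriting crop with most patches masked out and regresses the HOG descriptor of each masked patch, in the spirit of MaskFeat-style masked image modeling. All names (MaskedHOGViT, hog_targets), toy sizes (64x64 input, 16-pixel patches, 75% mask ratio), and hyperparameters are assumptions for illustration, not the authors' implementation; the NetRVLAD finetuning stage for retrieval is omitted.

# Illustrative sketch only: masked-patch HOG regression for pretraining.
import numpy as np
import torch
import torch.nn as nn
from skimage.feature import hog

PATCH = 16          # patch size in pixels (assumption)
HOG_DIM = 36        # 9 orientations x (16/8)^2 cells with 1x1 blocks
EMBED = 192
N_PATCHES = (64 // PATCH) ** 2   # toy 64x64 input -> 16 patches


def hog_targets(img: np.ndarray) -> torch.Tensor:
    """Per-patch HOG descriptors of a binarized grayscale image (H, W)."""
    feats = []
    for y in range(0, img.shape[0], PATCH):
        for x in range(0, img.shape[1], PATCH):
            patch = img[y:y + PATCH, x:x + PATCH]
            feats.append(hog(patch, orientations=9, pixels_per_cell=(8, 8),
                             cells_per_block=(1, 1), feature_vector=True))
    return torch.tensor(np.stack(feats), dtype=torch.float32)


class MaskedHOGViT(nn.Module):
    """Tiny ViT-style encoder with a linear head predicting per-patch HOG."""
    def __init__(self):
        super().__init__()
        self.patch_embed = nn.Linear(PATCH * PATCH, EMBED)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, EMBED))
        self.pos = nn.Parameter(torch.zeros(1, N_PATCHES, EMBED))
        layer = nn.TransformerEncoderLayer(EMBED, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(EMBED, HOG_DIM)

    def forward(self, patches: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # patches: (B, N, PATCH*PATCH); mask: (B, N) boolean, True = masked
        tokens = self.patch_embed(patches)
        tokens = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        return self.head(self.encoder(tokens + self.pos))


# One toy training step on a random "binarized" 64x64 crop.
img = (np.random.rand(64, 64) > 0.5).astype(np.float32)
targets = hog_targets(img).unsqueeze(0)                  # (1, N, HOG_DIM)
patches = torch.tensor(img, dtype=torch.float32).reshape(
    64 // PATCH, PATCH, 64 // PATCH, PATCH
).permute(0, 2, 1, 3).reshape(1, N_PATCHES, -1)
mask = torch.rand(1, N_PATCHES) < 0.75                   # mask ~75% of patches

model = MaskedHOGViT()
pred = model(patches, mask)
loss = ((pred - targets) ** 2)[mask].mean()              # loss on masked patches only
loss.backward()

After pretraining, the sketch's encoder would be kept and an aggregation layer (NetRVLAD in the paper) appended to produce global descriptors for writer retrieval; that step is not shown here.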

URL

https://arxiv.org/abs/2404.17221

PDF

https://arxiv.org/pdf/2404.17221.pdf

