Paper Reading AI Learner

A large-scale field test on word-image classification in large historical document collections using a traditional and two deep-learning methods

2019-04-17 16:03:14
Lambert Schomaker

Abstract

This technical report describes a practical field test on word-image classification in a very large collection of more than 300 diverse handwritten historical manuscripts, with 1.6 million unique labeled images and more than 11 million images used in testing. Results indicate that several deep-learning tests completely failed (mean accuracy 83%). In the tests with more than 1000 output units (lexical words) in one-hot encoding for classification, performance drops steeply to almost zero percent accuracy, even with a modest size of the pre-final (i.e., penultimate) layer (150 units). A traditional feature method (BOVW) displays consistent performance over numbers of classes and numbers of training examples (mean accuracy 87%). Additional tests using nearest mean on the output of the pre-final layer of an Inception V3 network, for each book, yielded only mediocre results (mean accuracy 49%), but were not sensitive to high numbers of classes. Notably, this experiment was only possible on the basis of labels harvested with a traditional method that already works starting from a single labeled image per class. It is expected that the performance of the failed deep-learning tests can be repaired, but only through human handcrafting (sic) of network architecture and hyperparameters. When the problematic books are not considered, end-to-end CNN training yields about 95% accuracy. This average is dominated by a large subset of Chinese characters; performances for other script styles are lower.
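The nearest-mean test on penultimate-layer features can be illustrated with a short sketch. The code below is an assumption-laden illustration, not the report's actual pipeline: it uses an ImageNet-pretrained Keras InceptionV3 with global average pooling as the pre-final (2048-dimensional) feature extractor, and the helper names (embed, class_means, nearest_mean_predict) are hypothetical.

# Minimal sketch (not the paper's code): nearest-mean word-image classification
# on the penultimate-layer output of an ImageNet-pretrained Inception V3.
# Assumes word images are already segmented and resized to 299x299 RGB;
# class prototypes would be computed per book from the labeled examples.
import numpy as np
import tensorflow as tf

# Pre-final (global-average-pooled) activations: one 2048-d vector per image.
backbone = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")

def embed(images):
    """images: float array (n, 299, 299, 3) in [0, 255] -> (n, 2048) features."""
    x = tf.keras.applications.inception_v3.preprocess_input(images.copy())
    return backbone.predict(x, verbose=0)

def class_means(features, labels):
    """Mean feature vector per class label (the nearest-mean prototypes)."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def nearest_mean_predict(features, prototypes):
    """Assign each feature vector to the class with the closest mean (Euclidean)."""
    classes = list(prototypes)
    proto = np.stack([prototypes[c] for c in classes])               # (k, 2048)
    d = np.linalg.norm(features[:, None, :] - proto[None], axis=2)   # (n, k)
    return np.array(classes)[d.argmin(axis=1)]

Because the number of prototypes simply grows with the lexicon, a classifier of this form does not have the fixed one-hot output layer whose size is reported to cause the steep accuracy drop beyond roughly 1000 classes.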

Abstract (translated)

本技术报告描述了一次关于文字图像分类的大规模实际测试:在一个包含300多部不同手写历史手稿的超大规模馆藏中,共有160万张不重复的带标注图像,以及1100多万张用于测试的图像。结果表明,若干深度学习测试完全失败(平均准确率83%)。在使用独热编码进行分类、输出单元(词汇词)超过1000个的测试中,即使倒数第二层(即预最终层)规模不大(150个单元),性能也骤降至几乎0%的准确率。传统特征方法(BOVW)在不同的类别数量和训练样本数量下表现稳定(平均准确率87%)。针对每本书,在Inception V3网络倒数第二层输出上使用最近均值分类的附加测试只取得了平庸的结果(平均准确率49%),但对类别数量较多并不敏感。值得注意的是,本实验之所以可行,完全依赖于用一种传统方法收集的标注,而该传统方法从每个类别仅有一张标注图像时即可工作。预计失败的深度学习测试的性能可以得到修复,但只能依靠对网络结构和超参数的人工设计(原文如此)。如果不考虑出问题的书籍,端到端CNN训练的准确率约为95%。这一平均值主要由数量庞大的汉字子集主导,其他书写风格的性能较低。

URL

https://arxiv.org/abs/1904.08421

PDF

https://arxiv.org/pdf/1904.08421.pdf

