Paper Reading AI Learner

Rethinking Image-based Table Recognition Using Weakly Supervised Methods

2023-03-14 06:03:57
Nam Tuan Ly, Atsuhiro Takasu, Phuc Nguyen, Hideaki Takeda

Abstract

Most previous methods for table recognition rely on training datasets containing many richly annotated table images. Detailed table image annotation, e.g., cell or text bounding box annotation, is however costly and often subjective. In this paper, we propose a weakly supervised model named WSTabNet for table recognition that relies only on HTML (or LaTeX) code-level annotations of table images. The proposed model consists of three main parts: an encoder for feature extraction, a structure decoder for generating the table structure, and a cell decoder for predicting the content of each cell in the table. Our system is trained end to end by stochastic gradient descent, requiring only table images and their ground-truth HTML (or LaTeX) representations. To facilitate table recognition with deep learning, we create and release WikiTableSet, the largest publicly available image-based table recognition dataset, built from Wikipedia. WikiTableSet contains nearly 4 million English table images, 590K Japanese table images, and 640K French table images, with corresponding HTML representations and cell bounding boxes. Extensive experiments on WikiTableSet and two large-scale datasets, FinTabNet and PubTabNet, demonstrate that the proposed weakly supervised model achieves better or similar accuracy compared with state-of-the-art models on all benchmark datasets.
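
The abstract describes an encoder plus dual-decoder layout: an image encoder, a structure decoder that emits HTML structure tokens, and a cell decoder that predicts each cell's text. Below is a minimal PyTorch sketch of that layout for orientation only; it is not the paper's implementation, and the module choices (a small CNN encoder, Transformer decoders), layer sizes, and vocabulary sizes are illustrative assumptions.

```python
# Minimal sketch of an encoder + structure-decoder + cell-decoder model.
# All architectural details below are assumptions for illustration,
# not taken from the WSTabNet paper or its released code.
import torch
import torch.nn as nn


class WSTabNetSketch(nn.Module):
    def __init__(self, structure_vocab=64, cell_vocab=512, d_model=256):
        super().__init__()
        # Encoder: extracts visual features from the table image.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Structure decoder: generates HTML structure tokens (<tr>, <td>, ...).
        self.structure_embed = nn.Embedding(structure_vocab, d_model)
        self.structure_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2)
        self.structure_head = nn.Linear(d_model, structure_vocab)
        # Cell decoder: predicts the text content of each cell.
        self.cell_embed = nn.Embedding(cell_vocab, d_model)
        self.cell_decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True),
            num_layers=2)
        self.cell_head = nn.Linear(d_model, cell_vocab)

    def forward(self, image, structure_tokens, cell_tokens):
        # Flatten the CNN feature map into a sequence of visual tokens.
        feats = self.encoder(image)                  # (B, C, H, W)
        memory = feats.flatten(2).transpose(1, 2)    # (B, H*W, C)
        # Decode table structure conditioned on the visual features.
        s = self.structure_decoder(self.structure_embed(structure_tokens), memory)
        # Decode cell content conditioned on the same visual features.
        c = self.cell_decoder(self.cell_embed(cell_tokens), memory)
        return self.structure_head(s), self.cell_head(c)


# Both heads can be trained jointly with cross-entropy against token sequences
# derived from the ground-truth HTML, so only image/HTML pairs are required
# (no cell or text bounding-box labels).
model = WSTabNetSketch()
img = torch.randn(1, 3, 128, 128)
s_tok = torch.randint(0, 64, (1, 20))
c_tok = torch.randint(0, 512, (1, 30))
structure_logits, cell_logits = model(img, s_tok, c_tok)
```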


URL

https://arxiv.org/abs/2303.07641

PDF

https://arxiv.org/pdf/2303.07641.pdf

