Paper Reading AI Learner

FastTextSpotter: A High-Efficiency Transformer for Multilingual Scene Text Spotting

2024-08-27 12:28:41
Alloy Das, Sanket Biswas, Umapada Pal, Josep Lladós, Saumik Bhattacharya

Abstract

The proliferation of scene text in both structured and unstructured environments presents significant challenges for optical character recognition (OCR), necessitating more efficient and robust text spotting solutions. This paper presents FastTextSpotter, a framework that integrates a Swin Transformer visual backbone with a Transformer Encoder-Decoder architecture, enhanced by a novel, faster self-attention unit, SAC2, to improve processing speed while maintaining accuracy. FastTextSpotter has been validated across multiple datasets, including ICDAR2015 for regular texts and CTW1500 and TotalText for arbitrary-shaped texts, benchmarking against current state-of-the-art models. Our results indicate that FastTextSpotter not only achieves superior accuracy in detecting and recognizing multilingual scene text (English and Vietnamese) but also improves model efficiency, thereby setting new benchmarks in the field. This study underscores the potential of advanced transformer architectures for improving the adaptability and speed of text spotting applications in diverse real-world settings. The dataset, code, and pre-trained models have been released on our GitHub.
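The abstract does not specify how the proposed SAC2 unit differs from standard transformer self-attention, so its formulation cannot be reproduced here. For context, the following is a minimal pure-Python sketch of the standard scaled dot-product attention that units like SAC2 aim to accelerate; all function names and the toy inputs are illustrative, not taken from the paper:

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def scaled_dot_product_attention(Q, K, V):
    """Standard attention: each query row attends over the key rows,
    producing a convex combination of the value rows."""
    d = len(Q[0])  # key/query dimension, used for the 1/sqrt(d) scaling
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

Because the attention weights are a softmax, each output row is a convex combination of the value rows; faster variants typically approximate or sparsify the quadratic query-key score matrix computed above.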

Abstract (translated)

The pervasiveness of scene text in both structured and unstructured environments poses significant challenges for optical character recognition (OCR), calling for more efficient and robust text spotting solutions. This paper introduces FastTextSpotter, a framework that integrates a Swin Transformer visual backbone with a Transformer Encoder-Decoder architecture and adds a new self-attention unit, SAC2, to improve processing speed while maintaining accuracy. FastTextSpotter has been validated on multiple datasets, including ICDAR2015 (regular text) and CTW1500 (arbitrary-shaped text), and compared against current state-of-the-art models. Our results show that FastTextSpotter not only achieves superior detection and recognition of multilingual scene text (English and Vietnamese) but also improves model efficiency, thereby setting new benchmarks in the field. This study highlights the potential of advanced Transformer architectures for improving the adaptability and speed of text spotting applications in diverse real-world settings. The dataset, code, and pre-trained models have been released on our GitHub.

URL

https://arxiv.org/abs/2408.14998

PDF

https://arxiv.org/pdf/2408.14998.pdf

