Paper Reading AI Learner

Script Identification in Natural Scene Image and Video Frame using Attention based Convolutional-LSTM Network

2018-07-23 16:20:08
Ankan Kumar Bhunia, Aishik Konwer, Ayan Kumar Bhunia, Abir Bhowmick, Partha P. Roy, Umapada Pal

Abstract

Script identification plays a significant role in analysing documents and videos. In this paper, we focus on the problem of script identification in scene text images and video scripts. Because of low image quality, complex backgrounds, and the similar character layouts shared by some scripts such as Greek and Latin, text recognition in those cases becomes challenging. We propose a novel method that extracts local and global features using a CNN-LSTM framework and weights them dynamically for script identification. First, we convert the images into patches and feed them into the CNN-LSTM framework. Attention-based patch weights are computed by applying a softmax layer after the LSTM. Next, these weights are multiplied patch-wise with the corresponding CNN features to yield local features. Global features are also extracted from the last cell state of the LSTM. A fusion technique then dynamically weights the local and global features for each individual patch. Experiments have been conducted on four public script identification datasets: SIW-13, CVSI-2015, ICDAR-17 and MLe2e. The proposed framework achieves superior results in comparison to conventional methods.
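The pipeline the abstract describes — patch-wise CNN features, attention weights from the LSTM outputs, a global feature from the last cell state, and a dynamic local/global fusion — can be sketched with NumPy stand-ins for the learned components. Everything here (dimensions, the scoring vector, the sigmoid gate) is an illustrative assumption, not the authors' actual architecture or parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical sizes: N image patches, d-dimensional features.
N, d = 6, 8

# Random stand-ins for the learned networks' outputs:
cnn_feats = rng.normal(size=(N, d))    # CNN feature vector per patch
lstm_hidden = rng.normal(size=(N, d))  # LSTM output per patch
lstm_last_cell = rng.normal(size=d)    # last LSTM cell state -> global feature

# Attention: score each patch from its LSTM output, softmax-normalise.
score_vec = rng.normal(size=d)              # hypothetical learned scoring vector
weights = softmax(lstm_hidden @ score_vec)  # shape (N,), sums to 1

# Local feature: attention-weighted combination of patch CNN features.
local_feat = weights @ cnn_feats            # shape (d,)

# Global feature: taken from the LSTM's final cell state.
global_feat = lstm_last_cell

# Dynamic fusion: the paper uses a learned weighting of local vs global
# features; a sigmoid of their scaled dot product stands in for it here.
gate = 1.0 / (1.0 + np.exp(-(local_feat @ global_feat) / np.sqrt(d)))
fused = gate * local_feat + (1.0 - gate) * global_feat  # fed to the classifier
```

The key property of the sketch is that the attention weights form a distribution over patches, so `local_feat` is a convex combination of patch features, while `global_feat` summarises the whole sequence.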


URL

https://arxiv.org/abs/1801.00470

PDF

https://arxiv.org/pdf/1801.00470.pdf

