Paper Reading AI Learner

GasHis-Transformer: A Multi-scale Visual Transformer Approach for Gastric Histopathology Image Classification

2021-04-29 17:46:00
Haoyuan Chen, Chen Li, Xiaoyan Li, Ge Wang, Weiming Hu, Yixin Li, Wanli Liu, Changhao Sun, Yudong Yao, Marcin Grzegorzek

Abstract

Existing deep learning methods for the intelligent diagnosis of gastric cancer concentrate on Convolutional Neural Networks (CNNs), while no approaches based on the Visual Transformer (VT) are available. VT is an efficient and stable deep learning model with recent applications in the field of computer vision, capable of improving the recognition of global information in images. In this paper, a multi-scale visual transformer model, GasHis-Transformer, is proposed for the gastric histopathology image classification (GHIC) task; it automatically classifies gastric histopathology images obtained by optical microscopy as abnormal or normal, facilitating the work of histopathologists. GasHis-Transformer is built on two fundamental modules: a global information module (GIM) and a local information module (LIM). In the experiments, an open-source hematoxylin and eosin (H&E) stained gastric histopathology dataset of 280 abnormal or normal images is first divided into training, validation, and test sets at a ratio of 1:1:2. GasHis-Transformer then achieves a precision of 98.0%, recall of 100.0%, F1-score of 96.0%, and accuracy of 98.0% on the test set. Furthermore, two extended experiments test the generalization ability of the proposed GasHis-Transformer model on a lymphoma image dataset of 374 images and a breast cancer dataset of 1390 images, achieving accuracies of 83.9% and 89.4%, respectively. Finally, GasHis-Transformer demonstrates high classification performance, showing its effectiveness and great potential in GHIC tasks.
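The abstract describes a two-branch design, a global information module (GIM) and a local information module (LIM), but does not specify how each branch is implemented or fused. Below is a minimal PyTorch sketch of that idea, assuming the GIM is a ViT-style encoder, the LIM is a CNN backbone (ResNet-18 here), and the two feature vectors are fused by concatenation before a linear classifier; these choices are illustrative assumptions, not the paper's confirmed architecture.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class GasHisTransformerSketch(nn.Module):
    """Hypothetical global/local two-branch classifier.

    The real GasHis-Transformer internals are not given in the abstract;
    the branch backbones and concatenation fusion are assumptions.
    """

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # GIM (assumed): a Vision Transformer encoder capturing global context.
        self.gim = models.vit_b_16(weights=None)
        self.gim.heads = nn.Identity()   # expose the 768-d class-token embedding
        # LIM (assumed): a CNN backbone capturing local texture.
        self.lim = models.resnet18(weights=None)
        self.lim.fc = nn.Identity()      # expose the 512-d pooled features
        # Fusion (assumed): concatenate, then classify abnormal vs. normal.
        self.classifier = nn.Linear(768 + 512, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        global_feat = self.gim(x)        # (B, 768) global information
        local_feat = self.lim(x)         # (B, 512) local information
        return self.classifier(torch.cat([global_feat, local_feat], dim=1))

if __name__ == "__main__":
    model = GasHisTransformerSketch()
    logits = model(torch.randn(2, 3, 224, 224))  # two H&E image tensors
    print(logits.shape)  # torch.Size([2, 2])
```

In this sketch the transformer branch supplies the improved global-information recognition the abstract attributes to VT, while the CNN branch preserves the local-feature strength of conventional CNN approaches.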

URL

https://arxiv.org/abs/2104.14528

PDF

https://arxiv.org/pdf/2104.14528.pdf

