Paper Reading AI Learner

Retinal Image Restoration using Transformer and Cycle-Consistent Generative Adversarial Network

2023-03-03 14:10:47
Alnur Alimanov, Md Baharul Islam

Abstract

Medical imaging plays a significant role in detecting and treating various diseases. However, these images are often of poor quality, leading to reduced efficiency, extra expenses, and even incorrect diagnoses. Therefore, we propose a retinal image enhancement method based on a vision transformer and a convolutional neural network. The method builds a cycle-consistent generative adversarial network trained on unpaired datasets. It consists of two generators that translate images from one domain to the other (low- to high-quality and vice versa), playing an adversarial game with two discriminators: each generator tries to produce images that its discriminator cannot distinguish from real ones. The generators combine a vision transformer (ViT) encoder with a convolutional neural network (CNN) decoder, while the discriminators are conventional CNN encoders. The restored images are evaluated quantitatively with peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), and qualitatively via vessel segmentation. The proposed method successfully reduces the adverse effects of blurring, noise, illumination disturbances, and color distortions while largely preserving structural and color information. Experimental results show the superiority of the proposed method: the testing PSNR is 31.138 dB on the first dataset and 27.798 dB on the second, with testing SSIM of 0.919 and 0.904, respectively.
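For reference, the quantitative metric and the cycle-consistency idea mentioned in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names are ours, and the L1 form of the cycle loss is the common CycleGAN choice, assumed here rather than taken from the paper.

```python
import numpy as np

def psnr(reference: np.ndarray, restored: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between two images of the same shape."""
    mse = np.mean((reference.astype(np.float64) - restored.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def cycle_consistency_loss(x: np.ndarray, x_cycled: np.ndarray) -> float:
    """L1 cycle loss: translating low->high->low should recover the input x."""
    return float(np.mean(np.abs(x.astype(np.float64) - x_cycled.astype(np.float64))))
```

For example, an image restored with a uniform error of 25.5 intensity levels against a black reference yields a PSNR of exactly 20 dB, since 10·log10(255²/25.5²) = 20.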

Abstract (translated)

Medical imaging plays a significant role in detecting and treating various diseases. However, these images are often of poor quality, leading to reduced efficiency, extra expenses, and even incorrect diagnoses. We therefore propose a retinal image enhancement method that uses a vision transformer and a convolutional neural network. The method builds a cycle-consistent generative adversarial network trained on unpaired datasets. It consists of two generators that translate images between domains (e.g., low-quality to high-quality and vice versa), playing an adversarial game with two discriminators; the generators produce images the discriminators cannot distinguish from real ones. The generators combine a vision transformer (ViT) encoder with a convolutional neural network (CNN) decoder, and the discriminators are conventional CNN encoders. The restored images are evaluated quantitatively with peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM), and qualitatively via vessel segmentation. The method successfully reduces the adverse effects of blurring, noise, illumination disturbances, and color distortions while largely preserving structural and color information. Experimental results demonstrate the superiority of the method: the testing PSNR is 31.138 dB on the first dataset and 27.798 dB on the second, with testing SSIM of 0.919 and 0.904, respectively.

URL

https://arxiv.org/abs/2303.01939

PDF

https://arxiv.org/pdf/2303.01939.pdf
