Abstract
In medical image analysis, low-resolution images degrade the performance of image interpretation and may lead to misdiagnosis. Single image super-resolution (SISR) methods can improve the resolution and quality of medical images. Super-resolution methods based on generative adversarial networks (GANs) are currently widely used and have shown very good performance. In this work, we use the Real-Enhanced Super-Resolution Generative Adversarial Network (Real-ESRGAN) model to enhance the resolution and quality of medical images. Unlike natural-image datasets, medical datasets rarely have very high spatial resolution. Transfer learning is an effective way to address this: a model trained on an external dataset (often of natural images) is fine-tuned to enhance medical images. In our proposed approach, the pre-trained generator and discriminator networks of the Real-ESRGAN model are fine-tuned on medical image datasets. In this paper, we work with retinal images and chest X-ray images, using the STARE retinal image dataset and the Tuberculosis Chest X-rays (Shenzhen) dataset. The proposed model produces more accurate and natural textures, and its output images have better detail and resolution than those of the original Real-ESRGAN model.
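The core of the approach described above is standard transfer learning: start from pre-trained super-resolution weights and continue training on medical images. As a minimal illustrative sketch (the tiny network below merely stands in for Real-ESRGAN's RRDB generator, and all names, shapes, and hyperparameters here are assumptions, not the paper's actual configuration):

```python
# Hedged sketch of fine-tuning a pre-trained SR generator on medical images.
# TinyGenerator is a stand-in for Real-ESRGAN's generator; in the paper's
# setting, weights would be loaded from the released Real-ESRGAN checkpoint.
import torch
import torch.nn as nn


class TinyGenerator(nn.Module):
    def __init__(self, scale: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.upsample = nn.Sequential(
            nn.Upsample(scale_factor=scale, mode="nearest"),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.upsample(self.features(x))


def fine_tune_step(model, lr_batch, hr_batch, optimizer):
    """One fine-tuning step with a pixel-wise L1 loss (illustrative only;
    the real training also uses perceptual and adversarial losses)."""
    optimizer.zero_grad()
    sr = model(lr_batch)
    loss = nn.functional.l1_loss(sr, hr_batch)
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    gen = TinyGenerator(scale=4)
    opt = torch.optim.Adam(gen.parameters(), lr=1e-4)
    lr_imgs = torch.rand(2, 1, 16, 16)   # grayscale low-res patches
    hr_imgs = torch.rand(2, 1, 64, 64)   # 4x ground-truth patches
    loss = fine_tune_step(gen, lr_imgs, hr_imgs, opt)
    print(gen(lr_imgs).shape)  # torch.Size([2, 1, 64, 64])
```

In practice one would load the published Real-ESRGAN weights into the full generator and discriminator before this loop, exactly as the abstract describes.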
URL
https://arxiv.org/abs/2211.00577