Abstract
Artificial intelligence algorithms have demonstrated their image classification and segmentation capabilities over the past decade. However, these algorithms perform worse on real clinical data than on the curated data used in simulations. This research presents a novel hybrid learning model that combines self-supervised learning and knowledge distillation to achieve sufficient generalization and robustness. The self-attention mechanism and tokens employed in the ViT, together with the local-to-global learning approach of the hybrid model, enable the proposed algorithm to extract a high-dimensional, high-quality feature space from images. To demonstrate the proposed network's ability to classify and extract feature spaces from medical images, we apply it to a dataset of Diabetic Retinopathy images, specifically the EyePACS dataset. This dataset is structurally more complex and more challenging with respect to damaged regions than other medical image datasets. In this study, self-supervised learning and knowledge distillation are used to classify this dataset for the first time. Moreover, among self-supervised learning and knowledge distillation models, ours is the first in which the test set is 50% larger than the training set. Unlike many studies, we did not remove any images from the dataset. Our algorithm achieved multiclass classification accuracies of 79.1% with a linear classifier and 74.36% with the k-NN algorithm. Compared with a similar state-of-the-art model, our method achieves higher accuracy and a more effective representation space.
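The abstract does not give implementation details, but the training paradigm it names (self-supervised learning via knowledge distillation with a ViT-style backbone, later evaluated with a linear classifier and k-NN) resembles student/teacher self-distillation. The sketch below is a minimal, assumed illustration of that paradigm, not the paper's code: the Encoder backbone, the temperatures t_student/t_teacher, and the EMA momentum are placeholder assumptions.

# Minimal self-distillation sketch (student/teacher with EMA), illustrating the
# general idea described in the abstract. Not the paper's implementation.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Stand-in backbone; the paper uses a hybrid ViT-based model."""
    def __init__(self, in_dim=3 * 224 * 224, out_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(), nn.Linear(in_dim, 512), nn.GELU(), nn.Linear(512, out_dim)
        )
    def forward(self, x):
        return self.net(x)

student = Encoder()
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # the teacher is updated only by EMA, never by gradients

opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
t_student, t_teacher, momentum = 0.1, 0.04, 0.996  # assumed temperatures and EMA rate

def distillation_loss(student_out, teacher_out):
    # Cross-entropy between sharpened teacher targets and student predictions.
    targets = F.softmax(teacher_out / t_teacher, dim=-1)
    return -(targets * F.log_softmax(student_out / t_student, dim=-1)).sum(dim=-1).mean()

def train_step(view1, view2):
    with torch.no_grad():
        t1, t2 = teacher(view1), teacher(view2)
    s1, s2 = student(view1), student(view2)
    # Each view's student output is matched to the teacher output of the other view.
    loss = 0.5 * (distillation_loss(s1, t2) + distillation_loss(s2, t1))
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():  # EMA update of the teacher from the student weights
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(momentum).add_(ps, alpha=1 - momentum)
    return loss.item()

# Usage: random tensors stand in for two augmentations of a retinal image batch.
loss = train_step(torch.randn(4, 3, 224, 224), torch.randn(4, 3, 224, 224))

After pretraining in this fashion, the frozen features would be evaluated with a linear classifier and k-NN, matching the evaluation protocol reported in the abstract.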
URL
https://arxiv.org/abs/2410.00779