Abstract
Super-resolution, which aims to reconstruct high-resolution images from low-resolution inputs, has drawn considerable attention and has been intensively studied in the computer vision and remote sensing communities. Super-resolution is especially beneficial for Unmanned Aerial Vehicles (UAVs), as the amount and resolution of images captured by a UAV are highly limited by physical constraints such as flight altitude and load capacity. Following the successful application of deep learning methods to the super-resolution task, a series of super-resolution algorithms have been developed in recent years. In this paper, we propose a novel network for the super-resolution of UAV images, based on the state-of-the-art Swin Transformer, which achieves better efficiency and competitive accuracy. Meanwhile, since land cover and land use monitoring is one of the essential applications of UAVs, simple image quality assessments such as the Peak Signal-to-Noise Ratio (PSNR) and the Structural Similarity Index Measure (SSIM) are not enough to comprehensively measure the performance of an algorithm. Therefore, we further investigate the effectiveness of super-resolution methods using the accuracy of semantic segmentation. The code will be available at this https URL.
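The two image quality metrics named above can be sketched as follows. This is a minimal illustration, not code from the paper: the PSNR follows the standard definition, while the SSIM here is a simplified single-window variant (the standard SSIM averages the same statistic over local sliding windows, as in scikit-image's `structural_similarity`).

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two images (higher is better)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

def ssim_global(ref, test, max_val=255.0):
    """Simplified SSIM computed over the whole image as one window.

    The standard SSIM averages this statistic over local sliding windows;
    this global version is only meant to show the formula's structure.
    """
    c1 = (0.01 * max_val) ** 2  # stabilizing constants from the SSIM definition
    c2 = (0.03 * max_val) ** 2
    x = ref.astype(np.float64)
    y = test.astype(np.float64)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

A perfect reconstruction gives an infinite PSNR and an SSIM of 1; for super-resolved images both metrics drop as the output diverges from the ground-truth high-resolution image.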
URL
https://arxiv.org/abs/2303.10232