Abstract
Transformer-based models have achieved remarkable results in low-level vision tasks, including image super-resolution (SR). However, early Transformer-based approaches that rely on self-attention within non-overlapping windows struggle to acquire global information. To activate more input pixels globally, hybrid attention models have been proposed. Moreover, training by solely minimizing pixel-wise RGB losses, such as L1, has been found inadequate for capturing essential high-frequency details. This paper presents two contributions: i) We introduce convolutional non-local sparse attention (NLSA) blocks to extend the hybrid Transformer architecture in order to further enlarge its receptive field. ii) We employ wavelet losses to train Transformer models, improving both quantitative and subjective performance. While wavelet losses have been explored previously, demonstrating their effectiveness for training Transformer-based SR models is novel. Our experimental results show that the proposed model achieves state-of-the-art PSNR results as well as superior visual performance across various benchmark datasets.
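As a rough illustration of the second contribution, the sketch below shows one plausible form of a wavelet-domain loss: a single-level orthonormal Haar decomposition implemented with fixed strided convolutions, with an L1 penalty per subband. The class name HaarWaveletLoss, the hf_weight parameter, and the single-level Haar choice are illustrative assumptions, not the paper's exact formulation (which may use a different wavelet family or a multi-level transform).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HaarWaveletLoss(nn.Module):
    """Hypothetical sketch of a wavelet-domain L1 loss for SR training.

    Decomposes SR and HR images into Haar subbands (LL, LH, HL, HH)
    and penalizes L1 differences per subband, with a separate weight
    on the high-frequency bands to emphasize fine detail.
    """

    def __init__(self, hf_weight: float = 1.0):
        super().__init__()
        # Orthonormal 2D Haar analysis filters (entries are +/- 0.5).
        ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
        lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
        hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
        hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
        # Shape (4, 1, 2, 2): four single-channel 2x2 filters.
        self.register_buffer("filters", torch.stack([ll, lh, hl, hh]).unsqueeze(1))
        self.hf_weight = hf_weight

    def _dwt(self, x: torch.Tensor) -> torch.Tensor:
        # Apply the transform channel-wise: (B, C, H, W) -> (B*C, 4, H/2, W/2).
        b, c, h, w = x.shape
        x = x.reshape(b * c, 1, h, w)
        return F.conv2d(x, self.filters, stride=2)

    def forward(self, sr: torch.Tensor, hr: torch.Tensor) -> torch.Tensor:
        sr_sub, hr_sub = self._dwt(sr), self._dwt(hr)
        ll_loss = F.l1_loss(sr_sub[:, :1], hr_sub[:, :1])   # low-frequency band
        hf_loss = F.l1_loss(sr_sub[:, 1:], hr_sub[:, 1:])   # detail bands
        return ll_loss + self.hf_weight * hf_loss
```

In a training loop, such a term would replace or complement the usual pixel-wise RGB loss, e.g. `loss = F.l1_loss(sr, hr) + HaarWaveletLoss(hf_weight=1.0)(sr, hr)`; the relative weighting between the spatial and wavelet terms is likewise an assumption here.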
URL
https://arxiv.org/abs/2404.11273