Abstract
This paper presents a new hierarchical vision Transformer for image style transfer, called Strips Window Attention Transformer (S2WAT), which serves as the encoder in an encoder-transfer-decoder architecture. Because S2WAT produces hierarchical features, it can leverage techniques proven in other areas of computer vision, such as feature pyramid networks (FPN) or U-Net, for image style transfer in future work. However, existing window-based Transformers produce grid-like artifacts in the stylized images when applied directly to image style transfer. To solve this problem, we propose S2WAT, whose representation is computed with Strips Window Attention (SpW Attention). SpW Attention integrates both local information and long-range dependencies in the horizontal and vertical directions through a novel feature fusion scheme named Attn Merge. Moreover, previous window-based Transformers require the feature resolution to be divisible by the window size, which prevents them from accepting inputs of arbitrary size. In this paper, we take advantage of padding & un-padding operations to make S2WAT support inputs of arbitrary size. Qualitative and quantitative experiments demonstrate that S2WAT achieves performance comparable to that of state-of-the-art CNN-based, flow-based, and Transformer-based approaches.
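The padding & un-padding idea mentioned above can be illustrated with a minimal sketch: pad the bottom and right of the input so both spatial dimensions become divisible by the window size, then crop back to the original resolution afterward. The function names below are hypothetical, and the actual S2WAT implementation presumably operates on feature tensors rather than nested lists; this only shows the arithmetic.

```python
def pad_to_multiple(h, w, window):
    """Bottom/right padding amounts that make (h, w) divisible by window."""
    pad_h = (window - h % window) % window
    pad_w = (window - w % window) % window
    return pad_h, pad_w

def pad_image(img, window, fill=0.0):
    """Pad a 2D grid (list of rows) on the bottom and right with `fill`.

    Returns the padded grid and the original size, so the output of the
    network can later be cropped back to the input resolution.
    """
    h, w = len(img), len(img[0])
    pad_h, pad_w = pad_to_multiple(h, w, window)
    padded = [row + [fill] * pad_w for row in img]
    padded += [[fill] * (w + pad_w) for _ in range(pad_h)]
    return padded, (h, w)

def unpad_image(img, orig_size):
    """Crop a padded grid back to its original (h, w) size."""
    h, w = orig_size
    return [row[:w] for row in img[:h]]
```

With a window size of 4, a 3x5 input is padded to 4x8 so it tiles exactly into windows, and un-padding after decoding restores the original 3x5 resolution.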
URL
https://arxiv.org/abs/2210.12381