Abstract
In this work, we address text-driven style transfer in the context of text-to-image (T2I) diffusion models. The main challenge is preserving the content structure while achieving effective stylization. Prior approaches directly concatenate the content and style prompts for prompt-level style injection, which leads to unavoidable structure distortions. We propose a novel solution to the text-driven style transfer task, Adaptive Style Incorporation (ASI), which achieves fine-grained feature-level style incorporation. It consists of Siamese Cross-Attention (SiCA), which decouples the single-track cross-attention into a dual-track structure to obtain separate content and style features, and the Adaptive Content-Style Blending (AdaBlending) module, which couples the content and style information in a structure-consistent manner. Experimentally, our method exhibits much better performance in both structure preservation and stylization.
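The dual-track idea behind SiCA can be illustrated with a minimal sketch: the same image-feature queries attend separately to the content-prompt and style-prompt embeddings, and the two outputs are then blended. Note this is an assumption-laden illustration, not the paper's implementation: the fixed scalar `alpha` stands in for the adaptive, structure-aware rule that the actual AdaBlending module applies, and all shapes and names below are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, kv):
    # single-track cross-attention: queries come from image features,
    # keys/values from text embeddings (keys and values shared here
    # for brevity; real layers use separate learned projections)
    d = q.shape[-1]
    attn = softmax(q @ kv.T / np.sqrt(d))
    return attn @ kv

def siamese_cross_attention(q, content_emb, style_emb, alpha=0.5):
    # dual-track sketch: the same queries attend to the content prompt
    # and the style prompt independently, then the two feature maps are
    # blended (a fixed alpha here; AdaBlending is adaptive in the paper)
    content_out = cross_attention(q, content_emb)
    style_out = cross_attention(q, style_emb)
    return (1 - alpha) * content_out + alpha * style_out

rng = np.random.default_rng(0)
q = rng.standard_normal((16, 8))        # 16 image tokens, dim 8
content = rng.standard_normal((5, 8))   # content prompt tokens
style = rng.standard_normal((4, 8))     # style prompt tokens
out = siamese_cross_attention(q, content, style)
print(out.shape)  # (16, 8)
```

With `alpha=0` the output reduces to pure content-prompt attention, which is why feature-level blending can preserve structure in regions where style injection is suppressed.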
URL
https://arxiv.org/abs/2404.06835