Abstract
Inspired by the in-context learning mechanism of large language models (LLMs), a new paradigm of generalizable visual prompt-based image editing is emerging. Existing single-reference methods typically focus on style or appearance adjustments and struggle with non-rigid transformations. To address these limitations, we propose leveraging source-target image pairs to extract content-aware editing intent and transfer it to novel query images. To this end, we introduce RelationAdapter, a lightweight module that enables Diffusion Transformer (DiT)-based models to effectively capture and apply visual transformations from minimal examples. We also introduce Relation252K, a comprehensive dataset comprising 218 diverse editing tasks, to evaluate model generalization and adaptability in visual prompt-driven scenarios. Experiments on Relation252K show that RelationAdapter significantly improves the model's ability to understand and transfer editing intent, leading to notable gains in generation quality and overall editing performance.
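The abstract describes RelationAdapter only at a high level: a lightweight module that conditions a DiT on a source-target example pair. Below is a minimal sketch of what such pair-conditioned injection could look like; the class name, forward signature, and gated cross-attention path are illustrative assumptions inferred from the abstract, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class RelationAdapter(nn.Module):
    """Minimal sketch of a pair-conditioned adapter (hypothetical API).

    Encodes the transformation implied by a (source, target) example pair
    into compact "editing intent" tokens, which are injected into a host
    DiT block through an auxiliary cross-attention path. All names and
    shapes here are illustrative assumptions, not the paper's code.
    """

    def __init__(self, dim: int = 1024, heads: int = 8):
        super().__init__()
        # Fuse source/target token features into relation tokens.
        self.pair_proj = nn.Linear(2 * dim, dim)
        # Lightweight cross-attention: DiT tokens attend to relation tokens.
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)
        # Zero-initialized gate so the adapter starts as a no-op.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(
        self,
        hidden: torch.Tensor,      # (B, N, dim) DiT tokens for the query image
        src_tokens: torch.Tensor,  # (B, M, dim) encoded source example
        tgt_tokens: torch.Tensor,  # (B, M, dim) encoded target example
    ) -> torch.Tensor:
        # Relation tokens summarize "what changed" between source and target.
        relation = self.pair_proj(torch.cat([src_tokens, tgt_tokens], dim=-1))
        attn_out, _ = self.cross_attn(self.norm(hidden), relation, relation)
        # Gated residual injection preserves the pretrained DiT at initialization.
        return hidden + self.gate * attn_out
```

Zero-initializing the gate is a common adapter trick (used, e.g., in ControlNet-style designs): fine-tuning starts from the unmodified pretrained model and gradually learns how strongly to apply the extracted editing intent.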
URL
https://arxiv.org/abs/2506.02528