Abstract
We introduce a new architecture for personalization of text-to-image diffusion models, coined Mixture-of-Attention (MoA). Inspired by the Mixture-of-Experts mechanism utilized in large language models (LLMs), MoA distributes the generation workload between two attention pathways: a personalized branch and a non-personalized prior branch. MoA is designed to retain the original model's prior by fixing its attention layers in the prior branch, while minimally intervening in the generation process with the personalized branch that learns to embed subjects in the layout and context generated by the prior branch. A novel routing mechanism manages the distribution of pixels in each layer across these branches to optimize the blend of personalized and generic content creation. Once trained, MoA facilitates the creation of high-quality, personalized images featuring multiple subjects with compositions and interactions as diverse as those generated by the original model. Crucially, MoA enhances the distinction between the model's pre-existing capability and the newly augmented personalized intervention, thereby offering a more disentangled subject-context control that was previously unattainable. Project page: this https URL
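The abstract describes the architecture at a high level: two parallel attention branches (a frozen prior branch and a trainable personalized branch) whose outputs are blended per pixel by a learned router. Below is a minimal, hypothetical PyTorch sketch of such a layer, based only on this description; the `MoALayer` class, the tensor shapes, the sigmoid router, and the convex blend are all illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of a Mixture-of-Attention (MoA) layer, inferred from the
# abstract: a frozen "prior" attention branch, a trainable "personalized"
# branch, and a learned per-pixel router that blends the two outputs.
# Names, shapes, and the router design are assumptions, not the paper's code.
import copy
import torch
import torch.nn as nn


class MoALayer(nn.Module):
    def __init__(self, prior_attention: nn.MultiheadAttention, dim: int):
        super().__init__()
        # Prior branch: the original model's attention, kept frozen so the
        # pretrained prior over layout and context is preserved.
        self.prior = prior_attention
        for p in self.prior.parameters():
            p.requires_grad = False
        # Personalized branch: a trainable copy that learns to embed subjects
        # into the layout generated by the prior branch.
        self.personalized = copy.deepcopy(prior_attention)
        for p in self.personalized.parameters():
            p.requires_grad = True
        # Router: predicts, per pixel (latent token), how much weight to give
        # the personalized branch versus the prior branch.
        self.router = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_pixels, dim) latent features; context: text/subject
        # embeddings of shape (batch, num_tokens, dim). Assumes the attention
        # modules were built with batch_first=True.
        prior_out, _ = self.prior(x, context, context)
        pers_out, _ = self.personalized(x, context, context)
        w = self.router(x)  # (batch, num_pixels, 1), values in [0, 1]
        # Per-pixel convex blend of the two branches.
        return (1.0 - w) * prior_out + w * pers_out


# Usage (illustrative): wrap an existing cross-attention module.
attn = nn.MultiheadAttention(embed_dim=320, num_heads=8, batch_first=True)
moa = MoALayer(attn, dim=320)
```

Note that when the router weight is zero for a pixel, the layer reduces exactly to the frozen prior attention, which is one way the abstract's claim of "minimal intervention" in the generation process could be realized.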
URL
https://arxiv.org/abs/2404.11565