Abstract
Text-guided diffusion models have revolutionized generative tasks by producing high-fidelity content from text descriptions. They have also enabled an editing paradigm in which one concept is replaced with another through text conditioning (e.g., replacing a dog with a tiger). In this work, we explore a different question: instead of replacing a concept, can we enhance or suppress the concept itself? Through an empirical study, we observe that concepts can be decomposed in text-guided diffusion models. Leveraging this insight, we introduce ScalingConcept, a simple yet effective method to scale decomposed concepts up or down in real inputs without introducing new elements. To systematically evaluate our approach, we present the WeakConcept-10 dataset, in which concepts are imperfect and need to be enhanced. More importantly, ScalingConcept enables a variety of novel zero-shot applications across the image and audio domains, including tasks such as canonical pose generation and generative sound highlighting or removal.
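The abstract does not spell out the scaling rule. Below is a minimal sketch, assuming the decomposed concept is scaled in the style of classifier-free guidance during the reverse diffusion pass over a real input: the difference between the concept-conditioned and unconditional noise predictions is treated as the concept direction and multiplied by a scale factor. The names `eps_model`, `scaled_noise_prediction`, and `omega` are placeholders for illustration, not the paper's API.

```python
import torch

def scaled_noise_prediction(eps_model, z_t, t, concept_emb, null_emb, omega):
    """Scale the decomposed concept direction by `omega`.

    Assumed behavior: omega > 1 enhances the concept, 0 <= omega < 1
    suppresses it, and omega == 1 reduces to plain reconstruction.
    `eps_model(z_t, t, emb)` stands in for any text-conditioned noise
    predictor (e.g., a diffusion UNet).
    """
    eps_null = eps_model(z_t, t, null_emb)     # prediction with concept removed
    eps_cond = eps_model(z_t, t, concept_emb)  # prediction with concept present
    # Extrapolate along the concept direction, mirroring the
    # classifier-free-guidance form: eps_null + w * (eps_cond - eps_null).
    return eps_null + omega * (eps_cond - eps_null)

# Toy stand-in noise predictor, just to make the sketch executable.
dummy_eps = lambda z, t, emb: 0.1 * z + emb.mean()
z = torch.randn(1, 4, 64, 64)
concept, null = torch.randn(8), torch.zeros(8)
enhanced = scaled_noise_prediction(dummy_eps, z, 10, concept, null, omega=3.0)
suppressed = scaled_noise_prediction(dummy_eps, z, 10, concept, null, omega=0.0)
```

For this to edit a real input rather than generate a new sample, the input presumably needs to be inverted first (e.g., via DDIM inversion) so that the reverse pass with omega = 1 reconstructs it faithfully; moving omega away from 1 then scales only the targeted concept without introducing new elements, as the abstract describes.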
URL
https://arxiv.org/abs/2410.24151