Abstract
We investigate the task of adapting image generative models to different datasets without fine-tuning. To this end, we introduce Semantica, an image-conditioned diffusion model capable of generating images based on the semantics of a conditioning image. Semantica is trained exclusively on web-scale image pairs: it receives a random image from a webpage as conditional input and models another random image from the same webpage. Our experiments highlight the expressivity of pretrained image encoders and the necessity of semantic-based data filtering for achieving high-quality image generation. Once trained, Semantica can adaptively generate new images from a dataset simply by using images from that dataset as input. We study its transfer properties on ImageNet, LSUN Churches, LSUN Bedroom and SUN397.
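The training setup described above pairs two random images drawn from the same webpage, conditioning on one and modeling the other. A minimal sketch of that pairing step is below; the record layout, grouping key, and function name are illustrative assumptions, not the paper's actual data pipeline.

```python
import random
from collections import defaultdict

def make_training_pairs(records, seed=0):
    """Group images by source webpage and sample (condition, target)
    pairs of distinct images from the same page.

    `records` is an assumed layout: a list of (page_url, image_id) tuples.
    """
    rng = random.Random(seed)
    by_page = defaultdict(list)
    for url, image_id in records:
        by_page[url].append(image_id)

    pairs = []
    for url, images in by_page.items():
        if len(images) < 2:
            continue  # need at least two images from the same page
        # One random image is the conditioning input, the other the target.
        cond, target = rng.sample(images, 2)
        pairs.append((cond, target))
    return pairs

# Toy example: only the page with two images yields a training pair.
records = [
    ("example.com/a", "img1"),
    ("example.com/a", "img2"),
    ("example.com/b", "img3"),
]
pairs = make_training_pairs(records)
```

In the actual model, each image in a pair would be mapped through a pretrained image encoder, with the resulting embedding conditioning the diffusion model that generates the paired image.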
URL
https://arxiv.org/abs/2405.14857