Abstract
Existing work has observed that current text-to-image systems do not accurately reflect explicit spatial relations between objects such as 'left of' or 'below'. We hypothesize that this is because explicit spatial relations rarely appear in the image captions used to train these models. We propose an automatic method that, given existing images, generates synthetic captions that contain 14 explicit spatial relations. We introduce the Spatial Relation for Generation (SR4G) dataset, which contains 9.9 million image-caption pairs for training and more than 60 thousand captions for evaluation. In order to test generalization, we also provide an 'unseen' split, where the sets of objects in the train and test captions are disjoint. SR4G is the first dataset that can be used to spatially fine-tune text-to-image systems. We show that fine-tuning two different Stable Diffusion models (denoted as SD$_{SR4G}$) yields up to 9-point improvements in the VISOR metric. The improvement holds in the 'unseen' split, showing that SD$_{SR4G}$ is able to generalize to unseen objects. SD$_{SR4G}$ improves the state of the art with fewer parameters and avoids complex architectures. Our analysis shows that the improvement is consistent across all relations. The dataset and the code will be publicly available.
URL
https://arxiv.org/abs/2403.00587