Abstract
This paper proposes a method for generating images of customized objects specified by users. The method is based on a general framework that bypasses the lengthy optimization required by previous approaches, which often employ a per-object optimization paradigm. Our framework adopts an encoder to capture high-level identifiable semantics of objects, producing an object-specific embedding with only a single feed-forward pass. The acquired object embedding is then passed to a text-to-image synthesis model for subsequent generation. To effectively blend an object-aware embedding space into a well-developed text-to-image model under the same generation context, we investigate different network designs and training strategies, and propose a simple yet effective regularized joint training scheme with an object identity preservation loss. Additionally, we propose a caption generation scheme that proves critical in ensuring the object-specific embedding is faithfully reflected in the generation process while retaining control and editing abilities. Once trained, the network is able to produce diverse content and styles, conditioned on both texts and objects. We demonstrate through experiments that our proposed method can synthesize images with compelling output quality, appearance diversity, and object fidelity, without the need for test-time optimization. Systematic studies are also conducted to analyze our models, providing insights for future work.
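The pipeline sketched in the abstract — a single feed-forward encoder that maps an object image to an embedding, injection of that embedding into the text-conditioning sequence, and an identity-preservation term — can be illustrated with a toy sketch. All names, shapes, and the cosine-based loss below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # embedding dimension (illustrative)

def object_encoder(image_feats: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Single feed-forward pass: image features -> object-specific embedding."""
    return np.tanh(image_feats @ W)

def inject_object_embedding(text_embs: np.ndarray, obj_emb: np.ndarray,
                            placeholder_idx: int) -> np.ndarray:
    """Condition the text-to-image model by replacing a placeholder token's
    embedding in the caption sequence with the object embedding."""
    out = text_embs.copy()
    out[placeholder_idx] = obj_emb
    return out

def identity_preservation_loss(obj_emb: np.ndarray, ref_emb: np.ndarray) -> float:
    """One minus cosine similarity to a reference identity embedding
    (a stand-in for the paper's object identity preservation loss)."""
    cos = obj_emb @ ref_emb / (np.linalg.norm(obj_emb) * np.linalg.norm(ref_emb))
    return float(1.0 - cos)

# Toy example: a 5-token caption with a placeholder token at position 2.
image_feats = rng.normal(size=DIM)
W = rng.normal(size=(DIM, DIM))
obj_emb = object_encoder(image_feats, W)          # one forward pass, no per-object optimization
text_embs = rng.normal(size=(5, DIM))
conditioned = inject_object_embedding(text_embs, obj_emb, placeholder_idx=2)

assert conditioned.shape == text_embs.shape
assert np.allclose(conditioned[2], obj_emb)
# Zero (up to floating-point error) when the embedding matches the reference identity:
print(identity_preservation_loss(obj_emb, obj_emb))
```

In the actual method, the conditioned embedding sequence would feed the cross-attention layers of a text-to-image diffusion model; here the injection step only shows where the object embedding enters the generation context.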
URL
https://arxiv.org/abs/2304.02642