Abstract
Recent GAN inversion methods can successfully invert a real input image to its corresponding editable latent code in StyleGAN. By combining these with a language-vision model (CLIP), several text-driven image manipulation methods have been proposed. However, these methods incur extra cost to perform optimization for each image or each new attribute editing mode. To achieve more efficient editing, we propose a new Text-driven image Manipulation framework via Space Alignment (TMSA). The Space Alignment module aligns the same semantic regions in the CLIP and StyleGAN spaces. The text input can then be mapped directly into the StyleGAN space and used to find the semantic shift corresponding to the text description. The framework supports arbitrary image editing modes without additional cost. Our work provides the user with an interface to control the attributes of a given image according to text input and obtain the result in real time. Extensive experiments demonstrate our superior performance over prior works.
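The core idea, mapping a text description to an edit direction in StyleGAN's latent space via an alignment between the CLIP and StyleGAN spaces, can be sketched as below. This is a minimal illustration, not the paper's implementation: the alignment map `A`, the dimensionalities, the function names, and the edit strength `alpha` are all assumptions; in TMSA the alignment would be learned, whereas here a random linear map stands in for it.

```python
import numpy as np

rng = np.random.default_rng(0)

CLIP_DIM = 512  # CLIP joint embedding dimensionality
W_DIM = 512     # StyleGAN W latent dimensionality (assumed here)

# Hypothetical alignment module: a learned map would align semantically
# matching regions of the CLIP and StyleGAN spaces; a random linear map
# stands in for it in this sketch.
A = rng.standard_normal((W_DIM, CLIP_DIM)) / np.sqrt(CLIP_DIM)

def text_to_direction(text_embedding: np.ndarray) -> np.ndarray:
    """Map a unit-norm CLIP text embedding to a unit edit direction in W."""
    d = A @ text_embedding
    return d / np.linalg.norm(d)

def edit_latent(w: np.ndarray, text_embedding: np.ndarray,
                alpha: float = 3.0) -> np.ndarray:
    """Shift an inverted latent code w along the text-derived direction."""
    return w + alpha * text_to_direction(text_embedding)

# Usage with stand-in data: an inverted latent code and a text embedding.
w = rng.standard_normal(W_DIM)
t = rng.standard_normal(CLIP_DIM)
t /= np.linalg.norm(t)

w_edited = edit_latent(w, t, alpha=3.0)
print(w_edited.shape)
```

Because the edit is a single matrix-vector product and an addition once the alignment is fixed, any new text prompt yields an edit direction immediately, which is what enables editing without per-image or per-attribute optimization.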
URL
https://arxiv.org/abs/2301.10670