Abstract
Recent studies have shown strong generative performance in domain translation, particularly when transfer-learning techniques are applied to an unconditional generator. However, controlling the balance between different domain features with a single model remains challenging. Existing methods often require additional models, which is computationally demanding and leads to unsatisfactory visual quality; moreover, they offer only a restricted set of control steps, which prevents a smooth transition. In this paper, we propose a new approach to high-quality domain translation with better controllability. The key idea is to preserve source features within a disentangled subspace of the target feature space. This allows our method to smoothly control the degree to which source features are preserved while generating images from an entirely new domain, using only a single model. Our extensive experiments show that the proposed method produces more consistent and realistic images than previous works while maintaining precise controllability across different levels of transformation. The code is available at this https URL.
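To make the core idea concrete, below is a minimal, hypothetical sketch (not the paper's actual implementation) of preserving source features inside a fixed subspace of a target feature space: a projector onto an assumed subspace keeps the source contribution, and a scalar `alpha` controls how strongly source features are preserved, enabling a smooth transition with a single set of features. All names (`basis`, `blend`, `alpha`) and the random features are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration of "preserve source features in a disentangled
# subspace" with a smooth control knob `alpha`; not the paper's method.
rng = np.random.default_rng(0)
feat_dim, sub_dim = 512, 64

# Orthonormal basis spanning the assumed disentangled subspace.
basis, _ = np.linalg.qr(rng.normal(size=(feat_dim, sub_dim)))
project = basis @ basis.T  # projector onto that subspace

h_source = rng.normal(size=feat_dim)  # feature from the source-domain generator
h_target = rng.normal(size=feat_dim)  # feature from the fine-tuned target generator

def blend(alpha: float) -> np.ndarray:
    """Keep target features outside the subspace; inside it, interpolate
    toward the source feature with strength `alpha` in [0, 1]."""
    inside = alpha * (project @ h_source) + (1 - alpha) * (project @ h_target)
    outside = h_target - project @ h_target
    return outside + inside

# alpha = 0 -> pure target-domain feature; alpha = 1 -> source features
# fully preserved inside the subspace.
for alpha in (0.0, 0.5, 1.0):
    print(alpha, float(np.linalg.norm(blend(alpha) - h_target)))
```

Under these assumptions, intermediate values of `alpha` yield features that deviate from the target feature only within the chosen subspace, which is what allows a single model to traverse the transformation smoothly.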
URL
https://arxiv.org/abs/2303.11545