Abstract
The recently introduced ControlNet can steer text-driven image generation with geometric input such as human 2D poses or edge features. However, while ControlNet controls the geometric form of the instances in the generated image, it cannot dictate the visual appearance of each instance. We present FineControlNet, which provides fine control over each instance's appearance while maintaining precise pose control. Specifically, we develop and demonstrate FineControlNet with geometric control via human pose images and appearance control via instance-level text prompts. Spatially aligning instance-specific text prompts with 2D poses in latent space enables FineControlNet's fine control capabilities. We evaluate FineControlNet through rigorous comparison against state-of-the-art pose-conditioned text-to-image diffusion models, and it achieves superior performance in generating images that follow the user-provided instance-specific text prompts and poses. Project webpage: this https URL
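The core idea of spatially aligning instance-specific prompts with 2D poses can be illustrated by blending per-instance denoising results with spatial masks. The sketch below is a minimal, hypothetical illustration only (the function name, mask source, and blending scheme are assumptions, not the paper's actual implementation): each instance's prompt-conditioned latent is weighted by a mask derived from that instance's pose region, and the weighted latents are composed into one image latent.

```python
import numpy as np

def compose_instance_latents(instance_latents, instance_masks):
    """Hypothetical sketch: blend per-instance latents with spatial masks.

    instance_latents: list of (C, H, W) arrays, one per instance-level prompt
    instance_masks:   list of (H, W) arrays marking each instance's pose region
    """
    masks = np.stack(instance_masks).astype(float)             # (N, H, W)
    # Normalize so overlapping instance regions sum to weight 1
    weights = masks / np.clip(masks.sum(axis=0), 1e-8, None)   # (N, H, W)
    latents = np.stack(instance_latents)                       # (N, C, H, W)
    # Weighted sum over instances yields one composed latent
    return (weights[:, None] * latents).sum(axis=0)            # (C, H, W)
```

In a real diffusion pipeline this composition would happen at each denoising step, with each `instance_latents[i]` produced by conditioning on that instance's text prompt; here the masks stand in for the latent-space footprint of each 2D pose.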
URL
https://arxiv.org/abs/2312.09252