Paper Reading AI Learner

Meta ControlNet: Enhancing Task Adaptation via Meta Learning

2023-12-03 01:36:45
Junjie Yang, Jinze Zhao, Peihao Wang, Zhangyang Wang, Yingbin Liang

Abstract

Diffusion-based image synthesis has attracted extensive attention recently. In particular, ControlNet, which uses image-based prompts, exhibits powerful capability in image tasks such as Canny edge detection and generates images well aligned with these prompts. However, vanilla ControlNet generally requires extensive training of around 5000 steps to achieve desirable control for a single task. Recent context-learning approaches have improved its adaptability, but mainly for edge-based tasks, and they rely on paired examples. Thus, two important open issues remain to be addressed to reach the full potential of ControlNet: (i) zero-shot control for certain tasks and (ii) faster adaptation for non-edge-based tasks. In this paper, we introduce a novel Meta ControlNet method, which adopts a task-agnostic meta-learning technique and features a new layer-freezing design. Meta ControlNet significantly reduces the learning steps needed to attain control ability from 5000 to 1000. Further, Meta ControlNet exhibits direct zero-shot adaptability on edge-based tasks without any finetuning, and achieves control within only 100 finetuning steps on more complex non-edge tasks such as Human Pose, outperforming all existing methods. The code is available at this https URL.
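
The abstract does not spell out which meta-learning algorithm is used or which layers are frozen. As a rough, assumption-laden sketch only, the outer loop could resemble a first-order (Reptile-style) update over several image-prompt tasks, with part of the ControlNet held fixed; the names `controlnet`, `control_loss`, and the `"mid_block"` prefix below are placeholders, not the authors' actual code or API.

```python
# Minimal sketch: Reptile-style meta-update with layer freezing (assumptions, not the paper's code).
import copy
import torch

def meta_train_step(controlnet, task_loaders, inner_steps=10,
                    inner_lr=1e-5, meta_lr=1.0,
                    frozen_prefixes=("mid_block",)):
    """One outer meta-update averaged over several image-prompt tasks."""
    # Freeze the designated layers; only the remaining ControlNet weights adapt per task.
    for name, p in controlnet.named_parameters():
        p.requires_grad = not name.startswith(frozen_prefixes)

    init_state = copy.deepcopy(controlnet.state_dict())
    deltas = {k: torch.zeros_like(v) for k, v in init_state.items()
              if v.is_floating_point()}

    for loader in task_loaders:  # one DataLoader per task (Canny edge, HED, pose, ...)
        controlnet.load_state_dict(init_state)
        opt = torch.optim.AdamW(
            (p for p in controlnet.parameters() if p.requires_grad), lr=inner_lr)
        for _, batch in zip(range(inner_steps), loader):
            loss = control_loss(controlnet, batch)  # placeholder for the diffusion training loss
            opt.zero_grad()
            loss.backward()
            opt.step()
        # Accumulate how far this task moved the weights (the Reptile direction).
        adapted = controlnet.state_dict()
        for k in deltas:
            deltas[k] += adapted[k] - init_state[k]

    # Outer step: move the shared initialization toward the average adapted weights.
    new_state = dict(init_state)
    for k in deltas:
        new_state[k] = init_state[k] + meta_lr * deltas[k] / len(task_loaders)
    controlnet.load_state_dict(new_state)
```

Under this reading, the zero-shot behavior reported for edge-based tasks would correspond to using the meta-initialized weights directly, while non-edge tasks such as Human Pose would run only a short inner loop (on the order of 100 steps) from that initialization.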

URL

https://arxiv.org/abs/2312.01255

PDF

https://arxiv.org/pdf/2312.01255.pdf
