Paper Reading AI Learner

Support-Target Protocol for Meta-Learning

2021-04-08 12:41:33
Su Lu, Han-Jia Ye, De-Chuan Zhan

Abstract

The support/query (S/Q) training protocol is widely used in meta-learning. The S/Q protocol trains a task-specific model on the support set S and then evaluates it on the query set Q, optimizing the meta-model with the query loss, which depends on the size and quality of Q. In this paper, we study a new S/T protocol for meta-learning. Assuming that we have access to the theoretically optimal model T for a task, we can directly match the task-specific model trained on S to T. The S/T protocol offers a more accurate evaluation since it does not rely on possibly biased and noisy query instances. There are two challenges in putting the S/T protocol into practice. First, we must determine how to match the task-specific model to T. To this end, we minimize the discrepancy between them on a fictitious dataset generated by adversarial learning, distilling the prediction ability of T into the task-specific model. Second, we usually do not have ready-made optimal models. As an alternative, we construct surrogate target models by fine-tuning the globally pre-trained meta-model on local tasks, maintaining both efficiency and veracity.
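The per-task matching step described in the abstract can be sketched in miniature. The toy code below uses linear models and plain NumPy: a surrogate target is obtained by fine-tuning the meta-model on the whole local task, the task-specific model is fine-tuned on S only, and the two are matched on randomly drawn fictitious inputs (a stand-in for the paper's adversarially generated data). All function names and the linear-model setup are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def predict(w, X):
    # linear model: predictions are X @ w
    return X @ w


def finetune(w, X, y, lr=0.1, steps=50):
    # a few gradient-descent steps on squared error, starting from w
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(X)
        w -= lr * grad
    return w


def st_matching_loss(w_meta, X_s, y_s, X_task, y_task, n_fict=64):
    """Toy S/T loss for one task (illustrative sketch only).

    1. Surrogate target T: fine-tune the meta-model on the whole
       local task, as a stand-in for the theoretically optimal model.
    2. Task-specific model: fine-tune the meta-model on S alone.
    3. Match the two on fictitious inputs (random here, where the
       paper generates them by adversarial learning).
    """
    w_target = finetune(w_meta, X_task, y_task)       # surrogate T
    w_task = finetune(w_meta, X_s, y_s)               # model from S
    X_fict = rng.normal(size=(n_fict, len(w_meta)))   # fictitious data
    # discrepancy between task model and target on fictitious inputs
    diff = predict(w_task, X_fict) - predict(w_target, X_fict)
    return float(np.mean(diff ** 2))
```

In a full meta-learning loop, this matching loss would replace the query loss of the S/Q protocol as the signal for updating the meta-model, removing the dependence on sampled query instances.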


URL

https://arxiv.org/abs/2104.03736

PDF

https://arxiv.org/pdf/2104.03736.pdf
