Paper Reading AI Learner

Zero-Shot Deep Domain Adaptation

2018-07-23 15:31:54
Kuan-Chuan Peng, Ziyan Wu, Jan Ernst

Abstract

Domain adaptation is an important tool for transferring knowledge about a task (e.g. classification) learned in a source domain to a second, or target, domain. Current approaches assume that task-relevant target-domain data is available during training. We demonstrate how to perform domain adaptation when no such task-relevant target-domain data is available. To tackle this issue, we propose zero-shot deep domain adaptation (ZDDA), which uses privileged information from task-irrelevant dual-domain pairs. ZDDA learns a source-domain representation which is not only tailored for the task of interest but also close to the target-domain representation. Therefore, the solution to the source-domain task of interest (e.g. a classifier, for classification tasks), which is jointly trained with the source-domain representation, can be applied to both the source and target representations. Using the MNIST, Fashion-MNIST, NIST, EMNIST, and SUN RGB-D datasets, we show that ZDDA can perform domain adaptation in classification tasks without access to task-relevant target-domain training data. We also extend ZDDA to perform sensor fusion in the SUN RGB-D scene classification task by simulating task-relevant target-domain representations with task-relevant source-domain data. To the best of our knowledge, ZDDA is the first domain adaptation and sensor fusion method which requires no task-relevant target-domain data. The underlying principle is not particular to computer vision data, but should be extensible to other domains.
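The core idea of the alignment step can be illustrated with a minimal numpy sketch. This is not the paper's implementation (which uses deep CNN encoders); it is a toy linear version under assumed names and shapes: a frozen target-domain encoder produces representations of the task-irrelevant paired data, and a trainable source-domain encoder is fit by gradient descent to match them in L2 distance, so that anything trained on the source representation also applies to the target representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a task-irrelevant dual-domain pair
# (e.g. RGB and depth views of the same scenes). Shapes are illustrative.
pairs_src = rng.normal(size=(64, 16))   # source-domain views
pairs_tgt = rng.normal(size=(64, 16))   # paired target-domain views

# Frozen target-domain encoder (assumed pretrained; a linear map here).
W_tgt = rng.normal(size=(16, 8))
def encode_tgt(x):
    return x @ W_tgt

# Trainable source-domain encoder: fit it so that source representations
# of the paired data match the target representations (L2 alignment).
W_src = rng.normal(size=(16, 8))
lr = 0.01
losses = []
for _ in range(200):
    z_src = pairs_src @ W_src            # source representation
    z_tgt = encode_tgt(pairs_tgt)        # fixed target representation
    diff = z_src - z_tgt                 # alignment residual
    losses.append(np.mean(diff ** 2))
    grad = pairs_src.T @ diff * (2 / diff.size)  # d(mean sq. loss)/dW_src
    W_src -= lr * grad
```

In the full method, a task classifier is then trained jointly on the aligned source representation; because that representation has been pulled close to the target one, the same classifier can be reused on target-domain inputs without any task-relevant target-domain training data.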

URL

https://arxiv.org/abs/1707.01922

PDF

https://arxiv.org/pdf/1707.01922.pdf

