Paper Reading AI Learner

Towards Unsupervised Domain Adaptation via Domain-Transformer

2022-02-24 02:30:15
Ren Chuan-Xian, Zhai Yi-Ming, Luo You-Wei, Li Meng-Xue

Abstract

As a vital problem in pattern analysis and machine intelligence, Unsupervised Domain Adaptation (UDA) studies how to transfer an effective feature learner from a labeled source domain to an unlabeled target domain. Many methods based on Convolutional Neural Networks (CNNs) have achieved promising results over the past decades. Inspired by the success of Transformers, some methods attempt to tackle the UDA problem by adopting pure transformer architectures, interpreting the models via long-range dependencies at the image patch level. However, the algorithmic complexity is high and the interpretability remains weak. In this paper, we propose the Domain-Transformer (DoT) for UDA, which integrates CNN backbones with the core attention mechanism of Transformers from a new perspective. Specifically, a plug-and-play domain-level attention mechanism is proposed to learn the sample correspondence between domains. This differs significantly from existing methods, which only capture the local interactions among image patches. Instead of explicitly modeling the distribution discrepancy at either the domain level or the class level, DoT learns transferable features by achieving local semantic consistency across domains, where domain-level attention and manifold regularization are explored. Consequently, DoT is free of pseudo-labels and explicit domain-discrepancy optimization. Theoretically, DoT is connected with the optimal transport algorithm and statistical learning theory; this connection provides new insight into the core component of Transformers. Extensive experiments on several benchmark datasets validate the effectiveness of DoT.
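The domain-level attention described in the abstract can be pictured as cross-attention in which each target sample attends over all source samples, yielding a soft sample correspondence between domains. The sketch below is a minimal illustration of that idea, not the paper's exact formulation; the function name, the dot-product similarity, and the scaling are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def domain_attention(target_feats, source_feats):
    """Hypothetical domain-level attention sketch: rows of the returned
    weight matrix give, for each target sample, a soft correspondence
    distribution over all source samples.

    target_feats: (n_t, d) array of target-domain features
    source_feats: (n_s, d) array of source-domain features
    """
    d = source_feats.shape[1]
    # Scaled dot-product similarity between every target/source pair.
    scores = target_feats @ source_feats.T / np.sqrt(d)   # (n_t, n_s)
    weights = softmax(scores, axis=1)                     # rows sum to 1
    # Each target feature is re-expressed as a weighted mix of source features.
    attended = weights @ source_feats                      # (n_t, d)
    return attended, weights
```

Under this reading, the attention weights play the role of a transport-like coupling between the two sample sets, which is one intuition behind the connection to optimal transport mentioned in the abstract.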

URL

https://arxiv.org/abs/2202.13777

PDF

https://arxiv.org/pdf/2202.13777.pdf

