Transfer Learning across Languages from Someone Else's NMT Model

2019-09-24 14:32:52
Tom Kocmi, Ondřej Bojar

Abstract

Neural machine translation is demanding in terms of training time, hardware resources, and the size and quantity of parallel sentences. We propose a simple transfer learning method to recycle already trained models for different language pairs with no need for modifications in model architecture, hyper-parameters, or vocabulary. We achieve better translation quality and shorter convergence times than when training from random initialization. To show the applicability of our method, we recycle a Transformer model trained by other researchers for English-to-Czech translation and use it to seed models for seven language pairs. Our translation models are significantly better, especially for low-resource languages, even when the re-used model's language pair is not linguistically related to the child language pair. Our approach needs only a single pretrained model for transfer to all the various language pairs. Additionally, we improve this approach with a simple vocabulary transformation. We analyze the behavior of transfer learning to understand the gains from unrelated languages.
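The core "recycling" idea from the abstract can be illustrated with a minimal PyTorch sketch: a child model with the same architecture, hyper-parameters, and vocabulary as the parent is initialized from the parent's checkpoint instead of random weights, then training simply continues on the child language pair's data. The model sizes, file names, and class below are illustrative assumptions, not the authors' released code.

```python
# Sketch only: same architecture/hyper-parameters/vocabulary as the parent (assumed values).
import torch
import torch.nn as nn

class NMTModel(nn.Module):
    """Toy Transformer NMT model; sizes are hypothetical stand-ins for the parent's."""
    def __init__(self, vocab_size=32000, d_model=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # parent's shared subword vocab, reused unchanged
        self.transformer = nn.Transformer(d_model=d_model, nhead=8,
                                          num_encoder_layers=6, num_decoder_layers=6,
                                          batch_first=True)
        self.generator = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        out = self.transformer(self.embed(src_ids), self.embed(tgt_ids))
        return self.generator(out)

# "Parent": e.g. the English-to-Czech model trained by other researchers.
parent = NMTModel()
torch.save(parent.state_dict(), "parent_en_cs.pt")   # stands in for the downloaded checkpoint

# "Child": a different language pair. Instead of random initialization,
# load all parent parameters and continue ordinary training on the child's parallel data.
child = NMTModel()
child.load_state_dict(torch.load("parent_en_cs.pt"))
optimizer = torch.optim.Adam(child.parameters(), lr=1e-4)
# ... standard training loop over the child corpus continues from here ...
```

No warm-up or architecture change is needed; the only difference from ordinary training is the non-random starting point, which is what makes a single pretrained model reusable across many child language pairs.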

URL

https://arxiv.org/abs/1909.10955

PDF

https://arxiv.org/pdf/1909.10955.pdf
