Paper Reading AI Learner

Nearest Neighbor Machine Translation is Meta-Optimizer on Output Projection Layer

2023-05-22 13:38:53
Ruize Gao, Zhirui Zhang, Yichao Du, Lemao Liu, Rui Wang

Abstract

Nearest Neighbor Machine Translation ($k$NN-MT) has achieved great success on domain adaptation tasks by integrating pre-trained Neural Machine Translation (NMT) models with domain-specific token-level retrieval. However, the reasons underlying its success have not been thoroughly investigated. In this paper, we provide a comprehensive analysis of $k$NN-MT through theoretical and empirical studies. Initially, we offer a theoretical interpretation of the working mechanism of $k$NN-MT as an efficient technique to implicitly execute gradient descent on the output projection layer of NMT, indicating that it is a specific case of model fine-tuning. Subsequently, we conduct multi-domain experiments and word-level analysis to examine the differences in performance between $k$NN-MT and entire-model fine-tuning. Our findings suggest that: (1) Incorporating $k$NN-MT with adapters yields comparable translation performance to fine-tuning on in-domain test sets, while achieving better performance on out-of-domain test sets; (2) Fine-tuning significantly outperforms $k$NN-MT on the recall of low-frequency domain-specific words, but this gap could be bridged by optimizing the context representations with additional adapter layers.
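The token-level retrieval the abstract refers to can be illustrated with the standard $k$NN-MT decoding step (Khandelwal et al., 2021): at each time step, the decoder's context representation queries a datastore of (context, target-token) pairs, and a retrieval distribution over the $k$ nearest entries is interpolated with the NMT model's softmax. The sketch below is a minimal NumPy illustration of that interpolation; all function and parameter names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def knn_mt_distribution(p_nmt, neighbor_tokens, neighbor_dists,
                        vocab_size, temperature=10.0, lam=0.5):
    """Sketch of the kNN-MT output distribution (assumed formulation).

    p_nmt          : (vocab_size,) softmax output of the NMT model
    neighbor_tokens: (k,) target-token ids of the k retrieved datastore entries
    neighbor_dists : (k,) L2 distances from the query context to each entry
    temperature    : softmax temperature over negative distances
    lam            : interpolation weight given to the retrieval distribution
    """
    # Softmax over negative distances -> weight for each retrieved entry.
    weights = np.exp(-np.asarray(neighbor_dists, dtype=float) / temperature)
    weights /= weights.sum()

    # Scatter-add the weights onto the vocabulary (tokens may repeat).
    p_knn = np.zeros(vocab_size)
    np.add.at(p_knn, np.asarray(neighbor_tokens), weights)

    # Final distribution: interpolate retrieval and NMT probabilities.
    return lam * p_knn + (1.0 - lam) * p_nmt

# Toy usage: vocabulary of 3 tokens, 3 retrieved neighbors.
p_nmt = np.array([0.7, 0.2, 0.1])
p = knn_mt_distribution(p_nmt, [1, 1, 2], [1.0, 1.0, 2.0], vocab_size=3)
```

The paper's theoretical contribution is to show that this interpolation can be read as implicit gradient descent on the output projection layer, i.e. a restricted form of fine-tuning.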


URL

https://arxiv.org/abs/2305.13034

PDF

https://arxiv.org/pdf/2305.13034.pdf

