
AMOM: Adaptive Masking over Masking for Conditional Masked Language Model

2023-03-13 20:34:56
Yisheng Xiao, Ruiyang Xu, Lijun Wu, Juntao Li, Tao Qin, Yan-Tie Liu, Min Zhang

Abstract

Transformer-based autoregressive (AR) methods have achieved appealing performance for varied sequence-to-sequence generation tasks, e.g., neural machine translation, summarization, and code generation, but suffer from low inference efficiency. To speed up the inference stage, many non-autoregressive (NAR) strategies have been proposed in the past few years. Among them, the conditional masked language model (CMLM) is one of the most versatile frameworks, as it can support many different sequence generation scenarios and achieve very competitive performance on these tasks. In this paper, we further introduce a simple yet effective adaptive masking over masking strategy to enhance the refinement capability of the decoder and make the encoder optimization easier. Experiments on 3 different tasks (neural machine translation, summarization, and code generation) with 15 datasets in total confirm that our proposed simple method achieves significant performance improvement over the strong CMLM model. Surprisingly, our proposed model yields state-of-the-art performance on neural machine translation (34.62 BLEU on WMT16 EN→RO, 34.82 BLEU on WMT16 RO→EN, and 34.84 BLEU on IWSLT De→En) and even better performance than the AR Transformer on 7 benchmark datasets with at least 2.2× speedup. Our code is available at GitHub.
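For context, the "refinement capability" the abstract refers to comes from CMLM's mask-predict decoding (Ghazvininejad et al., 2019): all target positions start masked, and each iteration re-masks and re-predicts the least-confident positions. Below is a minimal, runnable Python sketch of that loop under stated assumptions; `dummy_model`, `mask_predict`, and all other names are illustrative stand-ins rather than the authors' implementation, and AMOM's contribution (making the masking ratios adaptive on both the encoder and decoder sides) is only gestured at in a comment.

```python
# Minimal sketch of CMLM-style mask-predict decoding, the refinement loop
# that AMOM builds on. `dummy_model` is a hypothetical stand-in for a trained
# conditional masked language model; names and signatures are illustrative.
import random

MASK = "<mask>"

def dummy_model(source, target):
    """Hypothetical CMLM: returns (token, confidence) per target position.

    A real model would condition on `source` and the unmasked target tokens;
    here we just fill masks with placeholder tokens and random confidences.
    """
    return [(tok if tok != MASK else f"tok{i}",
             1.0 if tok != MASK else random.random())
            for i, tok in enumerate(target)]

def mask_predict(source, length, iterations=4, model=dummy_model):
    # Iteration 0: every target position starts out masked.
    target = [MASK] * length
    for t in range(iterations):
        preds = model(source, target)
        tokens = [tok for tok, _ in preds]
        confs = [c for _, c in preds]
        if t == iterations - 1:
            return tokens
        # Linear decay schedule: re-mask the n least-confident predictions.
        # Per the abstract, AMOM replaces fixed choices like this with an
        # adaptive masking-over-masking strategy (details in the paper).
        n = int(length * (1 - (t + 1) / iterations))
        worst = sorted(range(length), key=lambda i: confs[i])[:n]
        target = [MASK if i in worst else tok
                  for i, tok in enumerate(tokens)]

if __name__ == "__main__":
    print(mask_predict(["ein", "Beispiel"], length=5))
```

Because every masked position is predicted in parallel, the number of decoder passes is the (small, fixed) iteration count rather than the target length, which is where the abstract's 2.2× speedup over the AR Transformer comes from.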

Abstract (translated)

Transformer-based autoregressive (AR) methods have achieved appealing performance on various sequence-to-sequence generation tasks, such as neural machine translation, summarization, and code generation, but suffer from low inference efficiency. To speed up the inference stage, many non-autoregressive (NAR) strategies have been proposed in the past few years. Among them, the conditional masked language model (CMLM) is one of the most versatile frameworks, as it can support many different sequence generation scenarios and achieves very competitive performance on these tasks. In this paper, we further introduce a simple yet effective adaptive masking over masking strategy to enhance the refinement capability of the decoder and make the encoder's optimization easier. Experiments on 3 different tasks (neural machine translation, summarization, and code generation) with 15 datasets in total confirm that our proposed simple method achieves significant performance improvements over the strong CMLM model. Surprisingly, our proposed model yields state-of-the-art performance on neural machine translation (34.62 BLEU on WMT16 EN→RO, 34.82 BLEU on WMT16 RO→EN, and 34.84 BLEU on IWSLT De→En) and even outperforms the AR Transformer on 7 benchmark datasets with at least a 2.2× speedup. Our code is available on GitHub.

URL

https://arxiv.org/abs/2303.07457

PDF

https://arxiv.org/pdf/2303.07457.pdf

