Paper Reading AI Learner

Dynamic Scheduled Sampling with Imitation Loss for Neural Text Generation

2023-01-31 16:41:06
Xiang Lin, Prathyusha Jwalapuram, Shafiq Joty

Abstract

State-of-the-art neural text generation models are typically trained to maximize the likelihood of each token in the ground-truth sequence conditioned on the previous target tokens. During inference, however, the model must make each prediction conditioned on the tokens it has generated itself. This train-test discrepancy is referred to as exposure bias. Scheduled sampling is a curriculum learning strategy that gradually exposes the model to its own predictions during training to mitigate this bias. Most proposed approaches design the schedule based on training steps, which generally requires careful tuning depending on the training setup. In this work, we introduce Dynamic Scheduled Sampling with Imitation Loss (DySI), which maintains the schedule based solely on training-time accuracy, and enhances the curriculum with an imitation loss that pushes the behavior of the free-running decoder toward that of a teacher-forced decoder. DySI is universally applicable across training setups with minimal tuning. Extensive experiments and analysis show that DySI not only achieves notable improvements on standard machine translation benchmarks, but also significantly improves the robustness of other text generation models.
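
The abstract does not spell out the paper's exact schedule or loss, so the following is a minimal PyTorch sketch of the general recipe it describes, under two illustrative assumptions: the sampling rate is mapped linearly from teacher-forced token accuracy, and the imitation term is a KL divergence between the free-running and (detached) teacher-forced output distributions. `ToyDecoder` and `dysi_step` are hypothetical names for this sketch, not the authors' code.

```python
# Hedged sketch of DySI-style training. Assumptions (not from the paper):
# a linear accuracy-to-sampling-rate mapping and a KL imitation term.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyDecoder(nn.Module):
    """Minimal GRU decoder: previous tokens -> next-token logits."""
    def __init__(self, vocab_size: int, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, inputs: torch.Tensor) -> torch.Tensor:
        h, _ = self.rnn(self.embed(inputs))
        return self.out(h)  # (batch, seq, vocab)

def dysi_step(model, inputs, targets, imitation_weight: float = 1.0):
    """One step: accuracy-driven scheduled sampling + imitation loss."""
    # 1) Teacher-forced pass: inputs are the gold previous tokens.
    tf_logits = model(inputs)
    tf_preds = tf_logits.argmax(dim=-1)

    # 2) Dynamic schedule: the sampling rate follows training-time accuracy
    #    (identity mapping assumed here for illustration).
    accuracy = (tf_preds == targets).float().mean().item()
    sample_rate = accuracy  # more accurate -> more self-generated input

    # 3) Scheduled sampling: per position, replace the gold input token with
    #    the model's own shifted prediction with probability `sample_rate`.
    own_inputs = inputs.clone()
    own_inputs[:, 1:] = tf_preds[:, :-1]  # prediction for position t-1 feeds t
    mask = torch.rand_like(inputs, dtype=torch.float) < sample_rate
    mixed_inputs = torch.where(mask, own_inputs, inputs)

    # 4) Second pass on the mixed prefix (the inference-like behavior).
    mixed_logits = model(mixed_inputs)

    # 5) Cross-entropy on the mixed pass, plus an imitation loss pulling the
    #    mixed-pass distribution toward the detached teacher-forced one.
    ce = F.cross_entropy(mixed_logits.flatten(0, 1), targets.flatten())
    imitation = F.kl_div(F.log_softmax(mixed_logits, dim=-1),
                         F.log_softmax(tf_logits.detach(), dim=-1),
                         log_target=True, reduction="batchmean")
    return ce + imitation_weight * imitation, sample_rate

# Usage on random data:
torch.manual_seed(0)
model = ToyDecoder(vocab_size=100)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inputs = torch.randint(0, 100, (8, 16))   # gold previous tokens
targets = torch.randint(0, 100, (8, 16))  # gold next tokens
loss, rate = dysi_step(model, inputs, targets)
loss.backward(); opt.step()
print(f"loss={loss.item():.3f}, sampling rate={rate:.2f}")
```

Driving the schedule from accuracy rather than step count is what makes the curriculum setup-agnostic: the sampling rate adapts automatically to however quickly a given model and dataset converge, instead of requiring a per-setup step schedule.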

URL

https://arxiv.org/abs/2301.13753

PDF

https://arxiv.org/pdf/2301.13753.pdf

