Paper Reading AI Learner

Addressing the Length Bias Problem in Document-Level Neural Machine Translation

2023-11-20 08:29:52
Zhuocheng Zhang, Shuhao Gu, Min Zhang, Yang Feng

Abstract

Document-level neural machine translation (DNMT) has shown promising results by incorporating more context information. However, this approach also introduces a length bias problem: DNMT suffers significant translation-quality degradation when decoding documents that are much shorter or longer than the maximum sequence length seen during training. To solve the length bias problem, we propose to improve the DNMT model in terms of its training method, attention mechanism, and decoding strategy. First, we sample the training data dynamically to ensure a more uniform distribution across different sequence lengths. Second, we introduce a length-normalized attention mechanism that helps the model focus on the target information, mitigating the attention divergence that arises when processing longer sequences. Finally, we propose a sliding-window decoding strategy that integrates as much context information as possible without exceeding the maximum sequence length. Experimental results show that our method brings significant improvements on several open datasets, and further analysis shows that it significantly alleviates the length bias problem.
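The sliding-window decoding strategy described above can be sketched as follows. This is a minimal illustration of the windowing idea only, not the authors' implementation: the function names, the per-sentence token-length accounting, and the greedy rule of packing as much preceding context as fits are assumptions for the sake of the example.

```python
def context_window(lengths, i, max_len):
    """Return (start, end) sentence indices of a window ending at sentence i.

    Greedily extends the window leftward, adding preceding context
    sentences while the total token length stays within max_len.
    """
    total = lengths[i]
    start = i
    while start > 0 and total + lengths[start - 1] <= max_len:
        start -= 1
        total += lengths[start]
    return start, i


def sliding_windows(lengths, max_len):
    """One window per sentence, each padded leftward with maximal context."""
    return [context_window(lengths, i, max_len) for i in range(len(lengths))]
```

For example, with sentence lengths `[3, 4, 5, 6]` and `max_len=10`, the window for the last sentence is `(3, 3)` (no earlier sentence fits), while the window for the third sentence is `(1, 2)`, reusing one preceding sentence as context.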

URL

https://arxiv.org/abs/2311.11601

PDF

https://arxiv.org/pdf/2311.11601.pdf
