Paper Reading AI Learner

Exploring Paracrawl for Document-level Neural Machine Translation

2023-04-20 11:21:34
Yusser Al Ghussin, Jingyi Zhang, Josef van Genabith

Abstract

Document-level neural machine translation (NMT) has outperformed sentence-level NMT on a number of datasets. However, document-level NMT is still not widely adopted in real-world translation systems mainly due to the lack of large-scale general-domain training data for document-level NMT. We examine the effectiveness of using Paracrawl for learning document-level translation. Paracrawl is a large-scale parallel corpus crawled from the Internet and contains data from various domains. The official Paracrawl corpus was released as parallel sentences (extracted from parallel webpages) and therefore previous works only used Paracrawl for learning sentence-level translation. In this work, we extract parallel paragraphs from Paracrawl parallel webpages using automatic sentence alignments and we use the extracted parallel paragraphs as parallel documents for training document-level translation models. We show that document-level NMT models trained with only parallel paragraphs from Paracrawl can be used to translate real documents from TED, News and Europarl, outperforming sentence-level NMT models. We also perform a targeted pronoun evaluation and show that document-level models trained with Paracrawl data can help context-aware pronoun translation.
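The abstract describes turning Paracrawl's sentence alignments back into parallel paragraphs by grouping consecutive aligned sentences that came from the same paragraph on both sides of a webpage pair. The paper does not publish pseudocode for this step, so the following is only a minimal hypothetical sketch of that grouping idea, assuming each alignment record carries source- and target-side paragraph ids:

```python
# Hypothetical sketch: group consecutive sentence alignments into
# parallel paragraphs. Each alignment is a tuple
# (src_sentence, tgt_sentence, src_paragraph_id, tgt_paragraph_id);
# a parallel paragraph is emitted whenever the pair of paragraph ids
# changes between consecutive alignments.

def extract_parallel_paragraphs(alignments):
    """alignments: list of (src_sent, tgt_sent, src_par_id, tgt_par_id)."""
    paragraphs = []
    cur_src, cur_tgt, cur_key = [], [], None
    for src_sent, tgt_sent, src_par, tgt_par in alignments:
        key = (src_par, tgt_par)
        if key != cur_key and cur_src:
            # paragraph boundary reached on either side: flush the run
            paragraphs.append((" ".join(cur_src), " ".join(cur_tgt)))
            cur_src, cur_tgt = [], []
        cur_key = key
        cur_src.append(src_sent)
        cur_tgt.append(tgt_sent)
    if cur_src:
        paragraphs.append((" ".join(cur_src), " ".join(cur_tgt)))
    return paragraphs
```

Each emitted pair can then be treated as a small parallel "document" for training a context-aware NMT model, which is the role the extracted paragraphs play in the paper.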

URL

https://arxiv.org/abs/2304.10216

PDF

https://arxiv.org/pdf/2304.10216.pdf
