Paper Reading AI Learner

MTCue: Learning Zero-Shot Control of Extra-Textual Attributes by Leveraging Unstructured Context in Neural Machine Translation

2023-05-25 10:06:08
Sebastian Vincent, Robert Flynn, Carolina Scarton

Abstract

Efficient utilisation of both intra- and extra-textual context remains one of the critical gaps between machine and human translation. Existing research has primarily focused on providing individual, well-defined types of context in translation, such as the surrounding text or discrete external variables like the speaker's gender. This work introduces MTCue, a novel neural machine translation (NMT) framework that interprets all context (including discrete variables) as text. MTCue learns an abstract representation of context, enabling transferability across different data settings and the leveraging of similar attributes in low-resource scenarios. Focusing on a dialogue domain with access to document and metadata context, we extensively evaluate MTCue on four language pairs in both translation directions. Our framework demonstrates significant improvements in translation quality over a parameter-matched non-contextual baseline, as measured by BLEU (+0.88) and COMET (+1.58). Moreover, MTCue significantly outperforms a "tagging" baseline at translating English text. Analysis reveals that the context encoder of MTCue learns a representation space that organises context based on specific attributes, such as formality, enabling effective zero-shot control. Pre-training on context embeddings also improves MTCue's few-shot performance compared to the "tagging" baseline. Finally, an ablation study conducted on model components and contextual variables further supports the robustness of MTCue for context-based NMT.
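The abstract's central idea — treating all context, including discrete variables, as text — can be sketched as follows. This is an illustrative assumption of how such verbalisation might look, not the paper's exact implementation; the function names and the `"name: value"` scheme are hypothetical.

```python
# Minimal sketch of the "all context as text" idea: discrete extra-textual
# attributes (e.g. speaker gender, formality) are verbalised into plain-text
# cues so a single context encoder can embed them alongside free-form
# document/metadata context. Illustrative only; MTCue's actual scheme may differ.

def verbalise_attributes(attributes: dict) -> list:
    """Turn discrete context variables into short text cues."""
    return [f"{name}: {value}" for name, value in attributes.items()]

def build_context_inputs(document_context: list, attributes: dict) -> list:
    """Combine free-text context (e.g. preceding sentences, metadata strings)
    with verbalised discrete attributes into one list of text cues, all of
    which would be fed to the same context encoder."""
    return document_context + verbalise_attributes(attributes)

cues = build_context_inputs(
    ["He slammed the door and left."],  # preceding sentence (document context)
    {"speaker gender": "female", "formality": "informal"},  # discrete variables
)
# Every cue is now plain text, so no per-attribute embedding tables are needed,
# and unseen attribute values at test time remain encodable (zero-shot control).
```

Because every attribute becomes text, the same encoder can, in principle, place novel values (say, a formality level never seen during training) near semantically similar training cues — which is the mechanism the abstract credits for zero-shot and few-shot control.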

URL

https://arxiv.org/abs/2305.15904

PDF

https://arxiv.org/pdf/2305.15904.pdf

