Abstract
Translating major-language resources to build minor-language resources has become a widely used approach. When translating complex data points composed of multiple components, it is common practice to translate each component separately. However, we argue that this practice often overlooks the interrelation between components within the same data point. To address this limitation, we propose a novel MT pipeline that accounts for intra-data relations when producing training data via MT. In our pipeline, all components of a data point are concatenated into a single translation sequence and then reconstructed into separate components after translation. We introduce a Catalyst Statement (CS) to strengthen the intra-data relation, and an Indicator Token (IT) to assist in decomposing the translated sequence back into its respective data components. Our approach yields a considerable improvement both in translation quality and in effectiveness as training data. Compared with the conventional approach of translating each data component separately, our method produces better training data, improving the trained model's performance by 2.690 points on the web page ranking (WPR) task and 0.845 points on the question generation (QG) task of the XGLUE benchmark.
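The concatenate-translate-decompose flow described above can be sketched as follows. The catalyst-statement text, the `<SEP{i}>` indicator-token format, and the helper names are illustrative assumptions, not the paper's exact choices; a real pipeline would call an MT system between the two steps.

```python
import re

# Hypothetical catalyst statement (CS) and indicator-token (IT) template;
# the paper's actual wording and token format may differ.
CS = "The following fields belong to one example."
IT = "<SEP{}>"

def compose(components):
    """Join all components of one data point into a single MT input sequence."""
    parts = [CS] + [f"{IT.format(i)} {c}" for i, c in enumerate(components)]
    return " ".join(parts)

def decompose(translated, n_components):
    """Split a translated sequence back into its components using the ITs."""
    # Everything before the first indicator token is the catalyst statement;
    # drop it, then split on the remaining indicator tokens.
    pieces = re.split(r"<SEP\d+>", translated)[1:]
    assert len(pieces) == n_components, "an indicator token was lost in translation"
    return [p.strip() for p in pieces]

# Round trip with an identity "translation" standing in for the MT system:
components = ["What is MT?", "Machine translation converts text between languages."]
seq = compose(components)
restored = decompose(seq, len(components))
assert restored == components
```

The decomposition step assumes the MT system preserves the indicator tokens verbatim; the assertion flags data points where a token was dropped or altered so they can be filtered or retranslated.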
URL
https://arxiv.org/abs/2404.16257