Abstract
Neural metrics for machine translation evaluation, such as COMET, exhibit significant improvements in their correlation with human judgments, as compared to traditional metrics based on lexical overlap, such as BLEU. Yet, neural metrics are, to a great extent, "black boxes" returning a single sentence-level score without transparency about the decision-making process. In this work, we develop and compare several neural explainability methods and demonstrate their effectiveness for interpreting state-of-the-art fine-tuned neural metrics. Our study reveals that these metrics leverage token-level information that can be directly attributed to translation errors, as assessed through comparison of token-level neural saliency maps with Multidimensional Quality Metrics (MQM) annotations and with synthetically-generated critical translation errors. To ease future research, we release our code at: this https URL.
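The abstract describes attributing a metric's single sentence-level score back to individual tokens via saliency maps. As a minimal illustration of one such explainability method, here is a gradient-times-input attribution for a toy linear sentence scorer in NumPy; the embeddings, weights, and mean-pooling head are illustrative assumptions, not the paper's actual COMET-based setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (assumption): random token embeddings and a linear regression head.
tokens = ["the", "cat", "sat", "mat"]
d = 8
E = rng.normal(size=(len(tokens), d))  # one embedding row per token
w = rng.normal(size=d)                 # regression head weights

# Sentence-level score: linear head over mean-pooled token embeddings,
# mimicking a metric that returns one scalar per sentence.
n = len(tokens)
score = w @ E.mean(axis=0)

# Gradient x input saliency: for mean pooling, d(score)/d(E[i]) = w / n,
# so each token's attribution is (w / n) . E[i].
saliency = (E * (w / n)).sum(axis=1)

# For a linear model the attributions sum exactly to the score
# (the "completeness" property of gradient x input).
assert np.isclose(saliency.sum(), score)
for tok, s in zip(tokens, saliency):
    print(f"{tok}: {s:+.3f}")
```

In the paper's setting, the analogous token-level saliency scores would be compared against MQM error-span annotations; here the linear model just makes the attribution exact and easy to verify.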
URL
https://arxiv.org/abs/2305.11806