Abstract
Quantitative and numerical comprehension in language is important in many fields, such as education and finance, but remains a challenging task for language models. While tool and calculator use has been shown to improve mathematical reasoning in large pretrained decoder-only language models, it remains unexplored for smaller language models with encoders. In this paper, we propose Pre-Calc, a simple pre-finetuning objective of learning to use a calculator, applicable to both encoder-only and encoder-decoder architectures and formulated as a discriminative and a generative task respectively. We pre-train BERT and RoBERTa for discriminative calculator use and Flan-T5 for generative calculator use on the MAWPS, SVAMP, and AsDiv-A datasets, which improves performance on downstream tasks that require numerical understanding. Our code and data are available at this https URL.
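The discriminative formulation can be pictured as follows: instead of having the model generate the answer digits itself, the model tags which tokens are operands and classifies the operation, and a calculator computes the result. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's actual implementation; all function and label names are assumptions.

```python
# Hedged sketch of discriminative "calculator use": tag operand tokens,
# classify the operation, and let a calculator produce the answer.
# In Pre-Calc-style training, a model (e.g. BERT) would predict the
# tags and operation label; here they are supplied by hand.
import operator

OPS = {"add": operator.add, "sub": operator.sub,
       "mul": operator.mul, "div": operator.truediv}

def calculator_answer(tokens, operand_tags, op_label):
    """tokens: word-problem tokens; operand_tags: 1 marks an operand token;
    op_label: one of OPS. Assumes a single-step, two-operand problem."""
    operands = [float(t) for t, tag in zip(tokens, operand_tags) if tag == 1]
    assert len(operands) == 2, "single-step problems have two operands"
    return OPS[op_label](operands[0], operands[1])

tokens = "Sam has 5 apples and buys 3 more .".split()
tags   = [0, 0, 1, 0, 0, 0, 1, 0, 0]
print(calculator_answer(tokens, tags, "add"))  # → 8.0
```

The point of routing arithmetic through the calculator is that the model's job reduces to classification (which tokens are numbers, which operation applies), a task that encoder-only models are well suited to.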
URL
https://arxiv.org/abs/2404.14355