Abstract
The widespread use of spreadsheet environments by billions of users presents a unique opportunity for formula-authoring assistance. Although large language models, such as Codex, can assist in general-purpose languages, they are expensive to train and challenging to deploy due to their large model sizes (up to billions of parameters). Moreover, they require hundreds of gigabytes of training data. We present FLAME, a T5-based model trained on Excel formulas that leverages domain insights to achieve competitive performance with a substantially smaller model (60M parameters) and two orders of magnitude less training data. We curate a training dataset using sketch deduplication, introduce an Excel-specific formula tokenizer for our model, and use domain-specific versions of masked span prediction and noisy auto-encoding as pretraining objectives. We evaluate FLAME on formula repair, formula auto-completion, and a novel task called syntax reconstruction. FLAME (60M) can outperform much larger models, such as Codex-Davinci (175B), Codex-Cushman (12B), and CodeT5 (220M), in 6 out of 10 settings.
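The sketch-deduplication step mentioned above can be illustrated with a minimal sketch. This is an assumption-laden toy version, not FLAME's actual pipeline: it anonymizes cell references, numeric constants, and string literals with simple regexes so that formulas differing only in operands collapse to the same "sketch", then keeps one representative per sketch.

```python
import re

def formula_sketch(formula: str) -> str:
    """Reduce an Excel formula to a sketch by anonymizing operands
    (hypothetical regex-based approximation, not FLAME's own code)."""
    s = re.sub(r'"[^"]*"', '<str>', formula)           # string literals
    s = re.sub(r'\$?[A-Z]{1,3}\$?\d+', '<cell>', s)    # cell refs like A1, $B$2
    s = re.sub(r'\d+(\.\d+)?', '<num>', s)             # numeric constants
    return s

def dedup_by_sketch(formulas):
    """Keep one representative formula per distinct sketch."""
    seen, kept = set(), []
    for f in formulas:
        k = formula_sketch(f)
        if k not in seen:
            seen.add(k)
            kept.append(f)
    return kept
```

For example, `=SUM(A1:A10)` and `=SUM(B2:B20)` both reduce to `=SUM(<cell>:<cell>)`, so only one survives deduplication; a structurally different formula such as `=IF(C1>0,"y","n")` is retained.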
URL
https://arxiv.org/abs/2301.13779