Abstract
Large language models (LLMs) have drastically changed the prospects of AI by enabling more complex natural language processing. However, current methodologies for training such LLMs require extensive resources, including large amounts of data, expensive hardware, and lengthy training. To address this problem, this paper proposes a new tokenization method, inspired by universal Lempel-Ziv-Welch (LZW) data compression, that compresses repetitive phrases into multi-word tokens. With MultiTok as a new tokenizing tool, we show that language models can be trained notably more efficiently while maintaining comparable accuracy on more succinct, compressed training data. In fact, our results demonstrate that MultiTok achieves performance comparable to the standard BERT tokenizer while also providing close to 2.5x faster training with more than 30% less training data.
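To make the LZW-inspired idea concrete, below is a minimal sketch of how a dictionary-based multi-word tokenizer might work. It is an illustration of the general LZW principle applied to a whitespace-split word stream, not the paper's MultiTok implementation; the function name `lzw_word_tokenize` and the example text are assumptions for demonstration only.

```python
# Sketch of an LZW-style multi-word tokenizer (illustrative, not the
# authors' MultiTok code). The dictionary starts with single words;
# repeated phrases are promoted to single multi-word tokens.

def lzw_word_tokenize(words):
    """Encode a list of words into token ids, growing a phrase
    dictionary LZW-style so repeated phrases map to one token."""
    # Initialize the dictionary with every distinct single word.
    dictionary = {(w,): i for i, w in enumerate(dict.fromkeys(words))}
    tokens = []
    phrase = ()
    for w in words:
        candidate = phrase + (w,)
        if candidate in dictionary:
            # Keep extending the current phrase while it is already known.
            phrase = candidate
        else:
            # Emit the longest known phrase, then register the new extension.
            tokens.append(dictionary[phrase])
            dictionary[candidate] = len(dictionary)
            phrase = (w,)
    if phrase:
        tokens.append(dictionary[phrase])
    return tokens, dictionary

# Usage: phrases like "the quick" and "brown fox" map to single tokens
# on their second occurrence, shortening the token sequence.
text = "the quick brown fox saw the quick brown fox".split()
ids, vocab = lzw_word_tokenize(text)
print(len(ids), len(text))  # 7 tokens for 9 words
```

Under this kind of scheme, the more repetitive the corpus, the shorter the resulting token sequence, which is the mechanism behind the reported reduction in training data and training time.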
URL
https://arxiv.org/abs/2410.21548