Abstract
As language models increase in size by the day, methods for efficient inference are critical to leveraging their capabilities for various applications. Prior work has investigated techniques such as model pruning, knowledge distillation, and data multiplexing to increase model throughput without sacrificing accuracy. In this paper, we combine two such methods -- structured pruning and data multiplexing -- to compound the speedup gains obtained by either method. Our approach, PruMUX, obtains up to 7.5-29.5X throughput improvement over the BERT-base model at accuracy thresholds ranging from 80% to 74%. We further study various combinations of parameters (such as sparsity and multiplexing factor) in the two techniques to provide a comprehensive analysis of the tradeoff between accuracy and throughput in the resulting models. We then propose Auto-PruMUX, a meta-level model that predicts high-performance pruning and multiplexing parameters for a desired accuracy loss budget, providing a practical method to leverage the combination effectively.
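As a rough illustration of the parameter-selection idea behind Auto-PruMUX, the sketch below fits simple least-squares surrogate models mapping (sparsity, multiplexing factor N) to accuracy and throughput from a handful of hypothetical measurements, then picks the candidate configuration with the highest predicted throughput whose predicted accuracy loss stays within a given budget. The feature map, model form, and all numbers are illustrative assumptions, not the paper's actual performance models or results.

```python
# Hypothetical sketch of Auto-PruMUX-style parameter selection:
# fit surrogate models for accuracy and throughput over (sparsity, N),
# then choose the config maximizing predicted throughput subject to
# an accuracy loss budget. All values below are made up for illustration.
import numpy as np

# Assumed measurements: (sparsity, multiplexing factor N) -> (accuracy, throughput multiplier)
measured = {
    (0.60, 2): (0.82, 4.1),
    (0.60, 4): (0.79, 7.8),
    (0.80, 2): (0.80, 6.0),
    (0.80, 4): (0.77, 11.2),
    (0.90, 8): (0.74, 20.5),
}

def _features(sparsity, n):
    # Simple feature map (bias, sparsity, log2(N)); an assumption, not the paper's model form.
    return [1.0, sparsity, np.log2(n)]

X = np.array([_features(s, n) for (s, n) in measured])
acc = np.array([a for (a, _) in measured.values()])
thr = np.array([t for (_, t) in measured.values()])

# Least-squares surrogate models for accuracy and throughput.
acc_w, *_ = np.linalg.lstsq(X, acc, rcond=None)
thr_w, *_ = np.linalg.lstsq(X, thr, rcond=None)

def choose_config(baseline_acc, loss_budget, candidates):
    """Return the (sparsity, N) candidate with the highest predicted throughput
    whose predicted accuracy loss from the baseline is within the budget."""
    best, best_thr = None, float("-inf")
    for s, n in candidates:
        f = np.array(_features(s, n))
        pred_acc, pred_thr = f @ acc_w, f @ thr_w
        if baseline_acc - pred_acc <= loss_budget and pred_thr > best_thr:
            best, best_thr = (s, n), pred_thr
    return best, best_thr

candidates = [(s, n) for s in (0.6, 0.7, 0.8, 0.9) for n in (2, 4, 8)]
print(choose_config(baseline_acc=0.84, loss_budget=0.05, candidates=candidates))
```

The same selection loop could, in principle, be run over any performance model; the linear surrogate here is only a stand-in for whatever predictor one actually fits.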
URL
https://arxiv.org/abs/2305.14706