Abstract
Large language models (LLMs) excel in most NLP tasks but require expensive cloud servers for deployment due to their size, while smaller models that can be deployed on lower-cost (e.g., edge) devices tend to lag behind in response quality. Therefore, in this work we propose a hybrid inference approach that combines their respective strengths to save cost and maintain quality. Our approach uses a router that assigns queries to the small or large model based on the predicted query difficulty and the desired quality level. The desired quality level can be tuned dynamically at test time to seamlessly trade quality for cost as the scenario requires. In experiments, our approach allows us to make up to 40% fewer calls to the large model, with no drop in response quality.
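A minimal sketch of the routing idea described above, assuming a scorer that predicts the probability the small model's response will be acceptable. All names and the toy scorer here are illustrative assumptions; in the paper the router is a learned model trained on query-difficulty labels, not a hand-written heuristic:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class HybridRouter:
    # Predicts P(small model's answer is good enough) in [0, 1].
    difficulty_scorer: Callable[[str], float]
    small_model: Callable[[str], str]   # cheap, e.g. on-device
    large_model: Callable[[str], str]   # expensive, e.g. cloud-hosted
    threshold: float = 0.5              # tunable at test time

    def answer(self, query: str) -> str:
        # Route easy queries to the cheap small model,
        # hard ones to the large model.
        if self.difficulty_scorer(query) >= self.threshold:
            return self.small_model(query)
        return self.large_model(query)

# Toy usage: stand-in scorer treats short queries as "easy".
router = HybridRouter(
    difficulty_scorer=lambda q: 1.0 - min(len(q) / 500, 1.0),
    small_model=lambda q: f"[small] {q}",
    large_model=lambda q: f"[large] {q}",
    threshold=0.5,
)
print(router.answer("What is 2 + 2?"))
```

The paper's dynamically tunable quality level corresponds to moving `threshold` at test time: raising it routes more queries to the large model (higher quality, higher cost), while lowering it saves cost by relying more on the small model.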
URL
https://arxiv.org/abs/2404.14618