Abstract
Dialogue state tracking (DST) is an important step in dialogue management, keeping track of users' beliefs. Existing works fine-tune all language model (LM) parameters to tackle the DST task, which requires significant data and computing resources for training and hosting. The cost grows quickly in real-world deployment, where dozens of fine-tuned LMs are used for different domains and tasks. To reduce parameter size and better utilize information shared across tasks, we propose to use soft prompt token embeddings to learn task properties. Without tuning any LM parameters, our method drastically reduces the number of trainable parameters to less than 0.5% of prior works while achieving better low-resource DST performance.
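The core idea of soft prompt tuning, as the abstract describes it, is that the pretrained LM stays frozen and only a small set of prompt token embeddings is trained. The following is a minimal PyTorch sketch of that mechanism; the tiny embedding layer and linear head standing in for a pretrained LM, and all dimensions, are illustrative placeholders, not the paper's actual model or configuration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical tiny frozen "LM": an embedding layer plus a linear head
# stand in for a pretrained transformer, purely for illustration.
vocab_size, d_model, n_prompt = 100, 32, 8

lm_embed = nn.Embedding(vocab_size, d_model)
lm_head = nn.Linear(d_model, vocab_size)
for p in list(lm_embed.parameters()) + list(lm_head.parameters()):
    p.requires_grad = False  # the LM parameters are never updated

# The only trainable parameters: soft prompt token embeddings.
soft_prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)

def forward(input_ids):
    # Prepend the learned prompt embeddings to the input token embeddings,
    # then run the (frozen) LM over the concatenated sequence.
    tok = lm_embed(input_ids)                                   # (B, T, d)
    prompt = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    hidden = torch.cat([prompt, tok], dim=1)                    # (B, n_prompt + T, d)
    return lm_head(hidden)                                      # (B, n_prompt + T, V)

# Gradients flow only into the soft prompt.
logits = forward(torch.randint(0, vocab_size, (2, 5)))
logits.sum().backward()
assert soft_prompt.grad is not None      # prompt embeddings get updated
assert lm_embed.weight.grad is None      # frozen LM receives no gradient
```

Because only `n_prompt * d_model` values are trained per task, many tasks can share one hosted LM copy, which is the source of the parameter savings the abstract reports.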
URL
https://arxiv.org/abs/2301.10915