Abstract
This paper aims to efficiently enable large language models (LLMs) to use external knowledge and goal guidance in conversational recommender system (CRS) tasks. Advanced LLMs (e.g., ChatGPT) are limited in domain-specific CRS tasks in their ability to 1) generate grounded responses with recommendation-oriented knowledge, or 2) proactively lead the conversation through different dialogue goals. In this work, we first analyze those limitations through a comprehensive evaluation, showing the necessity of external knowledge and goal guidance, both of which contribute significantly to recommendation accuracy and language quality. In light of this finding, we propose a novel ChatCRS framework that decomposes the complex CRS task into several sub-tasks through the implementation of 1) a knowledge retrieval agent that uses a tool-augmented approach to reason over external knowledge bases and 2) a goal-planning agent for dialogue goal prediction. Experimental results on two multi-goal CRS datasets reveal that ChatCRS sets new state-of-the-art benchmarks, improving the language quality of informativeness by 17% and proactivity by 27%, and achieving a tenfold improvement in recommendation accuracy.
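A minimal, hypothetical sketch of the two-agent decomposition described above: a goal-planning agent predicts the next dialogue goal, a knowledge-retrieval agent fetches grounding facts for that goal, and a response is composed from both. All function names, the stub logic, and the tiny in-memory knowledge base are illustrative assumptions, not taken from the paper.

```python
def goal_planning_agent(history):
    # Predict the next dialogue goal (e.g., "recommend" vs. "ask preference")
    # from the conversation so far. Stubbed with a trivial heuristic.
    return "recommend" if any("like" in turn for turn in history) else "ask preference"

def knowledge_retrieval_agent(goal):
    # Retrieve facts relevant to the predicted goal from an external
    # knowledge base. Stubbed here with a tiny in-memory dictionary.
    kb = {"recommend": ["MovieX is directed by DirectorY"]}
    return kb.get(goal, [])

def chatcrs_respond(history):
    # In the real system an LLM would generate the response conditioned on
    # the predicted goal and retrieved knowledge; here we assemble a string.
    goal = goal_planning_agent(history)
    facts = knowledge_retrieval_agent(goal)
    return f"[goal={goal}] " + ("; ".join(facts) or "What kind of movies do you like?")
```

The point of the sketch is the control flow, not the stubs: goal prediction runs first, and knowledge retrieval is conditioned on the predicted goal before response generation.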
URL
https://arxiv.org/abs/2405.01868