Abstract
This paper presents ReasoningRec, a reasoning-based recommendation framework that leverages Large Language Models (LLMs) to bridge the gap between recommendations and human-interpretable explanations. In contrast to conventional recommendation systems that rely on implicit user-item interactions, ReasoningRec employs LLMs to model users and items, focusing on preferences, aversions, and explanatory reasoning. The framework uses a larger LLM to generate synthetic explanations of user preferences, which are then used to fine-tune a smaller LLM for improved recommendation accuracy and human-interpretable explanations. Our experimental study investigates the impact of reasoning and contextual information on personalized recommendations, revealing that the quality of contextual and personalized data significantly influences the LLM's capacity to generate plausible explanations. Empirical evaluations demonstrate that ReasoningRec surpasses state-of-the-art methods by up to 12.5% in recommendation prediction while concurrently providing human-intelligible explanations. The code is available here: this https URL.
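The abstract describes a two-stage distillation pipeline: a larger "teacher" LLM writes explanations for observed user preferences and aversions, and those explanations become supervised fine-tuning data for a smaller "student" LLM. The sketch below illustrates one plausible shape of the first stage, assuming a Hugging Face transformers text-generation pipeline; the teacher model name, prompt template, and JSONL schema are illustrative assumptions, not the paper's actual setup.

    # A minimal sketch of stage 1 (synthetic-explanation generation), assuming a
    # Hugging Face `transformers` text-generation pipeline. The teacher model
    # name, prompt template, and output schema are hypothetical illustrations.
    import json

    from transformers import pipeline

    TEACHER_MODEL = "meta-llama/Meta-Llama-3-70B-Instruct"  # assumed larger "teacher" LLM

    def build_prompt(liked, disliked, candidate):
        # Cast the user's preferences and aversions as context, then ask the
        # teacher for a yes/no prediction together with its reasoning.
        return (
            f"A user liked: {', '.join(liked)}.\n"
            f"The same user disliked: {', '.join(disliked)}.\n"
            f"Will this user enjoy '{candidate}'? Answer Yes or No, then explain why."
        )

    def distill_explanations(interactions, out_path="reasoning_sft.jsonl"):
        # Stage 1: the larger LLM writes a synthetic explanation per interaction;
        # each (prompt, explanation) pair becomes fine-tuning data for the smaller LLM.
        teacher = pipeline("text-generation", model=TEACHER_MODEL)
        with open(out_path, "w") as f:
            for liked, disliked, candidate in interactions:
                prompt = build_prompt(liked, disliked, candidate)
                out = teacher(prompt, max_new_tokens=128, return_full_text=False)
                f.write(json.dumps({"prompt": prompt, "completion": out[0]["generated_text"]}) + "\n")

The second stage would then fine-tune the smaller LLM on the resulting JSONL with any standard supervised fine-tuning recipe, so that it learns to produce both the recommendation prediction and a human-readable explanation.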
URL
https://arxiv.org/abs/2410.23180