Abstract
In this study, we reproduced the work presented in the paper "XRec: Large Language Models for Explainable Recommendation" by Ma et al. (2024). The original authors introduced XRec, a model-agnostic collaborative instruction-tuning framework that enables large language models (LLMs) to provide users with comprehensive explanations of generated recommendations. Our objective was to replicate the results of the original paper, using Llama 3 as the evaluation LLM instead of GPT-3.5-turbo. We built on the source code provided by Ma et al. (2024) to achieve our goal. Our work extends the original paper by modifying the input embeddings or deleting the output embeddings of XRec's Mixture of Experts module. Our results show that XRec effectively generates personalized explanations and that incorporating collaborative information improves its stability. However, XRec did not consistently outperform all baseline models on every metric. Our extended analysis further highlights the importance of the Mixture of Experts embeddings in shaping explanation structures, showing how collaborative signals interact with language modeling. Through our work, we provide an open-source evaluation implementation that improves accessibility for researchers and practitioners alike. Our complete code repository can be found at this https URL.
URL
https://arxiv.org/abs/2510.06275