Abstract
Recent advancements in recommender systems have focused on leveraging Large Language Models (LLMs) to improve user preference modeling, yielding promising outcomes. However, current LLM-based approaches struggle to fully leverage user behavior sequences, resulting in suboptimal preference modeling for personalized recommendations. In this study, we propose a novel Counterfactual Fine-Tuning (CFT) method to address this issue by explicitly emphasizing the role of behavior sequences when generating recommendations. Specifically, we employ counterfactual reasoning to identify the causal effects of behavior sequences on model output and introduce a task that directly fits the ground-truth labels based on these effects, achieving the goal of explicit emphasis. Additionally, we develop a token-level weighting mechanism to adjust the emphasis strength for different item tokens, reflecting the diminishing influence of behavior sequences from earlier to later tokens when predicting an item. Extensive experiments on real-world datasets demonstrate that CFT effectively improves behavior sequence modeling. Our code is available at this https URL.
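The core idea in the abstract can be sketched in code: estimate the causal effect of the behavior sequence as the difference between the model's logits with and without the sequence in the prompt, fit the ground-truth item tokens on those effect logits, and down-weight later tokens. This is a minimal illustrative sketch, not the paper's implementation; the function name, signature, and the exponential `decay` weighting scheme are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def counterfactual_ft_loss(logits_full, logits_no_seq, target_ids, decay=0.9):
    """Hypothetical sketch of the CFT objective described in the abstract.

    logits_full:   (T, V) logits from the prompt containing the behavior sequence
    logits_no_seq: (T, V) logits from a counterfactual prompt with the sequence removed
    target_ids:    (T,)   ground-truth item token ids
    decay:         assumed per-token weight decay, modeling the diminishing
                   influence of the behavior sequence on later item tokens
    """
    # Causal effect of the behavior sequence on each token's prediction
    effect = logits_full - logits_no_seq
    # Directly fit the ground-truth labels on the effect logits
    per_token = F.cross_entropy(effect, target_ids, reduction="none")
    # Token-level weights: earlier item tokens receive stronger emphasis
    weights = decay ** torch.arange(target_ids.size(0), dtype=per_token.dtype)
    return (weights * per_token).sum() / weights.sum()
```

In practice this auxiliary loss would be combined with the standard fine-tuning objective on `logits_full`; the exact combination used by CFT is described in the paper.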
URL
https://arxiv.org/abs/2410.22809