Abstract
An important task for a recommender system is to provide interpretable explanations to its users, as explanations strengthen the system's credibility. Current interpretable recommender systems tend to focus on features known to be important to the user and present their explanations in a structured form. It is well established that user-generated reviews and reviewer feedback strongly influence users' decisions. At the same time, recent text generation work has shown that generated text can approach the quality of human-written text, and we aim to show that such generated text can be used successfully to explain recommendations. In this paper, we propose a framework of popular review-oriented generation models that creates personalised explanations for recommendations. The explanations are generated at both the character and word levels. We build a dataset of reviewer feedback drawn from the Amazon books review dataset. Our cross-domain experiments are designed to bridge natural language processing and recommender systems. Besides language-model evaluation methods, we employ DeepCoNN, a review-oriented recommender system based on a deep neural network, to evaluate the recommendation performance of generated reviews by root mean square error (RMSE). We demonstrate that the synthetic personalised reviews yield better recommendation performance than human-written reviews. To our knowledge, this presents the first machine-generated natural language explanations for rating prediction.
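The RMSE metric used above to compare generated and human-written reviews can be sketched as follows; the rating values here are hypothetical and not taken from the paper:

```python
import math

def rmse(predicted, actual):
    """Root mean square error between predicted and ground-truth ratings."""
    assert len(predicted) == len(actual) and predicted, "rating lists must match and be non-empty"
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted))

# Illustrative ratings on a 1-5 scale (hypothetical values)
predicted = [4.2, 3.1, 5.0]
actual = [4.0, 3.0, 5.0]
print(round(rmse(predicted, actual), 4))  # → 0.1291
```

A lower RMSE on a rating-prediction model fed with generated reviews indicates the synthetic text carries recommendation-relevant signal comparable to real reviews.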
URL
https://arxiv.org/abs/1807.06978