Paper Reading AI Learner

Modeling Temporal Positive and Negative Excitation for Sequential Recommendation

2024-10-29 13:02:11
Chengkai Huang, Shoujin Wang, Xianzhi Wang, Lina Yao

Abstract

Sequential recommendation aims to predict the next item of interest to a user by modeling the user's interest in items over time. Most existing works on sequential recommendation model users' dynamic interest in specific items while overlooking the static interest revealed by items' static attribute information, e.g., category or brand. Moreover, existing works often consider only the positive excitation that a user's historical interactions exert on his/her next choice among candidate items, ignoring the commonly existing negative excitation, which results in insufficient modeling of dynamic interest. Overlooking static interest and negative excitation leads to incomplete interest modeling and thus impedes recommendation performance. To this end, in this paper, we propose modeling both static interest and negative excitation for dynamic interest to further improve recommendation performance. Accordingly, we design a novel Static-Dynamic Interest Learning (SDIL) framework featuring a novel Temporal Positive and Negative Excitation Modeling (TPNE) module for accurate sequential recommendation. TPNE is specially designed to comprehensively model dynamic interest based on temporal positive and negative excitation learning. Extensive experiments on three real-world datasets show that SDIL effectively captures both static and dynamic interest and outperforms state-of-the-art baselines.
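The abstract's core idea — that a past interaction can either excite or suppress a candidate item, with influence fading over time — can be illustrated with a minimal Hawkes-style sketch. This is not the paper's TPNE module; the function name, the exponential decay kernel, and the signed-affinity inputs are all illustrative assumptions.

```python
import math

def temporal_excitation(interactions, t_now, decay=0.5):
    """Toy temporal excitation score (illustrative, not the authors' TPNE).

    interactions: list of (timestamp, signed_affinity) pairs, where a
    positive affinity means the past item excites the candidate (makes it
    more likely) and a negative affinity means it suppresses the candidate.
    Each contribution is down-weighted by exp(-decay * elapsed_time), so
    older interactions influence the next choice less.
    """
    positive = 0.0
    negative = 0.0
    for t, affinity in interactions:
        weight = math.exp(-decay * (t_now - t))  # older -> smaller weight
        if affinity > 0:
            positive += affinity * weight
        else:
            negative += -affinity * weight
    # Net excitation: positive pull minus negative push on the candidate.
    return positive - negative
```

For example, a recent purchase with affinity `-1.0` (say, the candidate is a near-duplicate of an item just bought) cancels an equally weighted positive signal, capturing the negative-excitation effect the abstract argues is commonly ignored.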

URL

https://arxiv.org/abs/2410.22013

PDF

https://arxiv.org/pdf/2410.22013.pdf

