Core ideas of this paper:
(1) Long-term + short-term modeling -> MF + RNN (LSTM); the combination is simple and conventional: the hidden vectors of the two models are fused.
(2) Learning method, GAN: using the fused representation from (1), train a generator G and a discriminator D.
(3) When training G, a reinforcement-learning trick is applied, since sampling discrete items blocks direct gradient flow.
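The fusion in step (1) can be sketched as follows. This is a minimal toy illustration, not the paper's actual architecture: `p_u`, `Q`, `h_u`, and the mixing weight `alpha` are all assumed stand-ins (the paper combines hidden vectors inside the network and learns everything end to end).

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_items = 8, 5   # latent dimension and catalog size (toy values)

# Long-term preferences: pretrained MF factors (assumed given here)
p_u = rng.normal(size=d)            # user latent vector
Q = rng.normal(size=(n_items, d))   # item latent vectors

# Short-term preferences: stands in for the LSTM's last hidden state
# over the user's recent session
h_u = rng.normal(size=d)

# Simplified fusion: a weighted sum of long-term and short-term scores.
alpha = 0.6
scores = alpha * (Q @ p_u) + (1 - alpha) * (Q @ h_u)
ranking = np.argsort(-scores)       # top-n recommendation order
print(ranking[:3])
```

A larger `alpha` leans on stable long-term taste; a smaller one follows the current session more closely.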
The GAN idea:
(1) Make the items produced by G (pushed toward positive samples) as close as possible to the sampled real positives, so that D cannot tell them apart.
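The adversarial training loop with the RL trick can be sketched like this. Everything here is a toy assumption: `theta` stands in for the fused LSTM+MF scores, the discriminator is a fixed stand-in that rewards one hard-coded positive item, and the update is plain REINFORCE rather than the paper's exact objective.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items = 5
TRUE_POSITIVE = 3   # index of the "real" item D rewards (toy assumption)

# Generator: a softmax policy over items
theta = np.zeros(n_items)

def g_probs(t):
    e = np.exp(t - t.max())
    return e / e.sum()

def d_score(item):
    # Discriminator stand-in: high reward for the real positive item
    return 1.0 if item == TRUE_POSITIVE else 0.1

# REINFORCE update: sampling a discrete item is non-differentiable,
# so G is trained as an RL agent using D's output as the reward.
lr = 0.5
for _ in range(500):
    probs = g_probs(theta)
    item = rng.choice(n_items, p=probs)
    reward = d_score(item)
    grad_log_pi = -probs
    grad_log_pi[item] += 1.0        # gradient of log pi(item | theta)
    theta += lr * reward * grad_log_pi

print(int(np.argmax(g_probs(theta))))
```

Because the discriminator pays more for the positive item, the policy's probability mass drifts toward it, which is exactly the "G's samples approach the real positives" intuition above.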
| Field | Value |
| --- | --- |
| Title | PLASTIC: Prioritize Long and Short-term Information in Top-n Recommendation using Adversarial Training |
| Authors | Wei Zhao |
| Year | 2018 |
| Keywords | GAN; Generative Adversarial Network; Reinforcement Learning; LSTM; MF |
| Abstract | Recommender systems provide users with ranked lists of items based on individual's preferences and constraints. Two types of models are commonly used to generate ranking results: long-term models and session-based models. While long-term models represent the interactions between users and items that are supposed to change slowly across time, session-based models encode the information of users' interests and changing dynamics of items' attributes in short terms. In this paper, we propose a PLASTIC model, Prioritizing Long And Short-Term Information in top-n reCommendation using adversarial training. In the adversarial process, we train a generator as an agent of reinforcement learning which recommends the next item to a user sequentially. We also train a discriminator which attempts to distinguish the generated list of items from the real list recorded. Extensive experiments show that our model exhibits significantly better performances on two widely used datasets. |