Core idea of this paper: incorporating users' current preference into cross-domain recommendation.
(1) Build cross-domain relations: each user interaction has associated textual data, from which several different topics can be extracted. Pooling these topics across all domains and counting their occurrences yields a frequency vector. A set of interactions constitutes a session.
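Step (1) can be sketched as a simple topic-counting routine. This is a minimal illustration, not the paper's implementation: the topic vocabulary and the sample session below are made-up placeholders.

```python
from collections import Counter

# Hypothetical cross-domain topic vocabulary (an assumption for illustration;
# the paper extracts topics from the textual data of each interaction).
VOCAB = ["music", "sports", "tech", "travel"]

def session_vector(interactions):
    """Build a topic-frequency vector for one session.

    `interactions` is a list of topic lists, one list per user interaction
    (the topics extracted from that interaction's textual data, possibly
    spanning different domains).
    """
    counts = Counter(topic for topics in interactions for topic in topics)
    return [counts[t] for t in VOCAB]

# A toy session with three interactions (illustrative data only):
session = [["music", "tech"], ["tech"], ["sports", "tech"]]
print(session_vector(session))  # [1, 1, 3, 0]
```

The resulting vector is the session-level signal that later steps turn into a learned representation.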
(2) Learn a representation for each session via the process in (1), then combine the session representations with an attention model.
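The attention combination in step (2) can be sketched as follows. This assumes plain dot-product attention with a softmax over scores; the paper's attention-gated mechanism may differ, so treat this purely as a sketch of the general idea.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def attend(session_vecs, query):
    """Combine past session vectors into one vector, weighting each session
    by its (dot-product) similarity to a query vector -- here, the current
    session's vector. Dot-product scoring is an assumption for illustration.
    """
    scores = [sum(q * v for q, v in zip(query, vec)) for vec in session_vecs]
    weights = softmax(scores)
    dim = len(session_vecs[0])
    return [sum(w * vec[i] for w, vec in zip(weights, session_vecs))
            for i in range(dim)]

# Toy example: two past sessions, attended with the current session as query.
history = [[1.0, 0.0], [0.0, 1.0]]
current = [1.0, 0.0]
combined = attend(history, current)
```

Sessions similar to the current one receive larger weights, so the combined vector leans toward the user's current preference.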
(3) Feed the current session's representation, together with the attended representation from (2), into an LSTM to learn the final representation used for prediction.
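For step (3), a bare-bones LSTM cell in plain Python is sketched below so the sequential update is concrete. This is only the standard cell: the paper extends it with attention gates, a higher-order interaction layer, and time-aware gates, none of which are shown here; all weights and dimensions are illustrative.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class TinyLSTMCell:
    """A minimal standard LSTM cell (sketch only; the paper's model adds
    attention gating, higher-order interactions, and time-aware gates)."""

    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = random.Random(seed)
        def mat(rows, cols):
            return [[rng.uniform(-0.1, 0.1) for _ in range(cols)]
                    for _ in range(rows)]
        # One weight matrix and bias per gate: input, forget, output, candidate.
        self.W = {g: mat(hidden_dim, input_dim + hidden_dim) for g in "ifoc"}
        self.b = {g: [0.0] * hidden_dim for g in "ifoc"}
        self.hidden_dim = hidden_dim

    def step(self, x, h, c):
        z = x + h  # concatenated input and previous hidden state
        def lin(gate):
            return [sum(w * v for w, v in zip(row, z)) + b
                    for row, b in zip(self.W[gate], self.b[gate])]
        i = [sigmoid(v) for v in lin("i")]          # input gate
        f = [sigmoid(v) for v in lin("f")]          # forget gate
        o = [sigmoid(v) for v in lin("o")]          # output gate
        g = [math.tanh(v) for v in lin("c")]        # candidate cell state
        c_new = [fj * cj + ij * gj for fj, cj, ij, gj in zip(f, c, i, g)]
        h_new = [oj * math.tanh(cj) for oj, cj in zip(o, c_new)]
        return h_new, c_new

# Feed a short sequence of session vectors; the final hidden state plays the
# role of the user representation used to score candidate items.
cell = TinyLSTMCell(input_dim=4, hidden_dim=3)
h, c = [0.0] * 3, [0.0] * 3
for vec in [[1.0, 1.0, 3.0, 0.0], [0.0, 2.0, 1.0, 1.0]]:
    h, c = cell.step(vec, h, c)
```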
| Title | LSTM Networks for Online Cross-Network Recommendations |
| --- | --- |
| Authors | Dilruk Perera and Roger Zimmermann |
| Year | 2018 |
| Keywords | Cross-domain; LSTM; attention model; next-item recommendation |
| Abstract | Cross-network recommender systems use auxiliary information from multiple source networks to create holistic user profiles and improve recommendations in a target network. However, we find two major limitations in existing cross-network solutions that reduce overall recommender performance. Existing models (1) fail to capture complex non-linear relationships in user interactions, and (2) are designed for offline settings hence, not updated online with incoming interactions to capture the dynamics in the recommender environment. We propose a novel multi-layered Long Short-Term Memory (LSTM) network based online solution to mitigate these issues. The proposed model contains three main extensions to the standard LSTM: First, an attention gated mechanism to capture long-term user preference changes. Second, a higher order interaction layer to alleviate data sparsity. Third, time aware LSTM cell gates to capture irregular time intervals between user interactions. We illustrate our solution using auxiliary information from Twitter and Google Plus to improve recommendations on YouTube. Extensive experiments show that the proposed model consistently outperforms state-of-the-art in terms of accuracy, diversity and novelty. |