The core idea of this paper: fuse news content information (title + article) with the user's historical click sequence.
The former is modeled with a CNN (PCNN, parallel CNN: the title and article texts are each encoded separately, then concatenated); the latter with an RNN. Unlike approaches that randomly initialize each news representation, here every news representation is learned by the CNN; the paper treats this as the static representation of a news item.
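The PCNN encoder can be sketched as two parallel convolution-plus-max-pool branches whose outputs are concatenated. This is a minimal numpy sketch under assumed dimensions (embedding size `d`, window width `w`, `f` filters per branch); the function names and hyperparameters are illustrative, not the paper's.

```python
import numpy as np

def conv1d_maxpool(seq_emb, filters, window):
    """Slide each filter over the word-embedding sequence, ReLU, then max-pool."""
    n_words, d = seq_emb.shape
    n_filters = filters.shape[0]
    feats = np.zeros((n_words - window + 1, n_filters))
    for i in range(n_words - window + 1):
        patch = seq_emb[i:i + window].ravel()        # (window * d,)
        feats[i] = np.maximum(filters @ patch, 0.0)  # ReLU activation
    return feats.max(axis=0)                         # max over positions

def pcnn_news_encoder(title_emb, article_emb, filters_t, filters_a, window=3):
    """Encode title and article in parallel, then concatenate -> static news vector."""
    title_vec = conv1d_maxpool(title_emb, filters_t, window)
    article_vec = conv1d_maxpool(article_emb, filters_a, window)
    return np.concatenate([title_vec, article_vec])

rng = np.random.default_rng(0)
d, f, w = 16, 8, 3
title = rng.normal(size=(10, d))    # 10 title words, embedding size 16
article = rng.normal(size=(50, d))  # 50 article words
Ft = rng.normal(size=(f, w * d))    # title-branch filters
Fa = rng.normal(size=(f, w * d))    # article-branch filters
news_vec = pcnn_news_encoder(title, article, Ft, Fa, w)
print(news_vec.shape)  # (16,) = 8 title features + 8 article features
```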
(1) User's current preference: an attention model takes all historically clicked news as input, guided by the target/candidate news (as opposed to being guided by an additional user representation; this paper has no static user representation), and outputs a weighted combination of the historical news representations.
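The candidate-guided weighted combination can be sketched as softmax attention where the candidate news vector acts as the query. The dot-product scoring here is an assumption; the paper's exact scoring function may differ.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())  # shift for numerical stability
    return e / e.sum()

def candidate_guided_attention(clicked, candidate):
    """clicked: (n, d) history news vectors; candidate: (d,) target news vector."""
    scores = clicked @ candidate  # relevance of each clicked news to the candidate
    weights = softmax(scores)     # normalize into an attention distribution
    return weights @ clicked      # weighted sum -> current-preference vector

rng = np.random.default_rng(1)
hist = rng.normal(size=(5, 16))  # 5 clicked news, 16-dim static vectors
cand = rng.normal(size=16)       # candidate news vector
pref = candidate_guided_attention(hist, cand)
print(pref.shape)  # (16,)
```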
(2) Sequential information of the user's clicking selection: for each timestamp t, all news clicked before t serve as input, guided by the current news. The representations obtained at all timestamps are stacked into a matrix, which is then vectorized.
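The step above can be sketched as a loop over timestamps: attend over the prefix of clicks before t (with the click at t as the query), stack the per-step summaries into a matrix, and vectorize it. Both the dot-product scoring and the vectorization operator (a plain flatten here) are assumptions made for illustration.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def sequential_features(clicks):
    """clicks: (T, d) news vectors in click order; returns a flattened (T-1)*d vector."""
    T, d = clicks.shape
    rows = []
    for t in range(1, T):
        history = clicks[:t]            # all clicks strictly before t
        query = clicks[t]               # current click guides the attention
        weights = softmax(history @ query)
        rows.append(weights @ history)  # attended summary at step t
    M = np.stack(rows)                  # (T-1, d) matrix of per-step summaries
    return M.ravel()                    # vectorize the matrix

rng = np.random.default_rng(2)
seq = rng.normal(size=(6, 16))  # 6 clicks, 16-dim static vectors
v = sequential_features(seq)
print(v.shape)  # (80,) = (6 - 1) * 16
```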
(3) The vectors from (1) and (2) are concatenated and fed to a fully-connected NN to obtain the final representation, which the paper regards as another news representation (the dynamic representation).
(4) Training objective: each candidate news has a static representation from the PCNN and a dynamic representation that fuses the user's history. If the candidate news is a positive example, we want the static and dynamic representations to be similar, and vice versa.
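Steps (3) and (4) together can be sketched as: one fully-connected layer over the concatenated branch outputs gives the dynamic representation, and a similarity-based loss pulls it toward (or pushes it away from) the candidate's static vector. The sigmoid-of-dot-product similarity, cross-entropy loss, and parameters `W`, `b` are all assumptions for illustration; the paper's exact objective may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dynamic_representation(pref_vec, seq_vec, W, b):
    """Concatenate the two branch outputs and pass through one FC layer."""
    return np.tanh(W @ np.concatenate([pref_vec, seq_vec]) + b)

def pair_loss(static_vec, dynamic_vec, label):
    """label=1 (clicked): pull representations together; label=0: push them apart."""
    p = sigmoid(static_vec @ dynamic_vec)  # similarity squashed to (0, 1)
    return -(label * np.log(p) + (1 - label) * np.log(1 - p))

rng = np.random.default_rng(3)
d = 16
pref = rng.normal(size=d)           # output of branch (1)
seqv = rng.normal(size=d)           # output of branch (2), assumed same size
static = rng.normal(size=d)         # PCNN static vector of the candidate
W = rng.normal(size=(d, 2 * d)) * 0.1  # hypothetical FC weights
b = np.zeros(d)
dyn = dynamic_representation(pref, seqv, W, b)
print(pair_loss(static, dyn, 1) >= 0.0)  # True: cross-entropy is non-negative
```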
| Field | Value |
| --- | --- |
| Title | DAN: Deep Attention Neural Network for News Recommendation |
| Authors | Qiannan Zhu |
| Year | 2019 |
| Keywords | CNN; RNN; Attention Model; unusual objective function; content-based; dynamic; well-drawn figures; Adressa-1week; Adressa-10week |
| Abstract | With the rapid information explosion of news, making personalized news recommendation for users becomes an increasingly challenging problem. Many existing recommendation methods that regard the recommendation procedure as the static process, have achieved better recommendation performance. However, they usually fail with the dynamic diversity of news and user's interests, or ignore the importance of sequential information of user's clicking selection. In this paper, taking full advantages of convolution neural network (CNN), recurrent neural network (RNN) and attention mechanism, we propose a deep attention neural network DAN for news recommendation. Our DAN model presents to use attention-based parallel CNN for aggregating user's interest features and attention-based RNN for capturing richer hidden sequential features of user's clicks, and combines these features for news recommendation. We conduct experiment on real-world news data sets, and the experimental results demonstrate the superiority and effectiveness of our proposed DAN model. |