This paper models user preferences from two angles: long-term and short-term.
(1) Long-term: traditional MF, with a user embedding p and an item embedding q.
(2) Short-term: split further into two levels (a separate set of item embeddings m is introduced for this part): individual level and union level.
1) Individual level: take the embeddings m of the user's n most recently visited items to form a matrix E, and use an attention model to build a new representation e (a vector) from it; see the sketch after this list.
2) Union level: use a residual network (ResNet) to obtain multi-layer, inter-connected representations, e.g. from p^1 to p^2 to p^l, where l is the layer index (Eq. 4); the paper takes the last layer's representation as the union level.
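To make the individual-level step concrete, here is a minimal numpy sketch of attention over the recent-item matrix E guided by the user representation p_u. The weight form score_i = v^T tanh(W_m m_i + W_p p_u), the helper names, and all shapes are my own assumptions for illustration, not necessarily the paper's exact Eq. (5).

```python
import numpy as np

def softmax(x):
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def individual_level_attention(E, p_u, W_m, W_p, v):
    """Aggregate the n most recent item embeddings E (n x d) into one vector e,
    with attention weights guided by the user representation p_u (d,).
    Assumed weight form: score_i = v^T tanh(W_m m_i + W_p p_u)."""
    scores = np.array([v @ np.tanh(W_m @ m_i + W_p @ p_u) for m_i in E])  # (n,)
    alpha = softmax(scores)          # attention weights over the n recent items
    e = alpha @ E                    # (d,) weighted sum of the m_i
    return e, alpha

# toy usage with random parameters
d, n = 8, 5
rng = np.random.default_rng(0)
E = rng.normal(size=(n, d))          # embeddings m_1..m_n of recently visited items
p_u = rng.normal(size=d)             # user representation guiding the attention
W_m, W_p = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=d)
e, alpha = individual_level_attention(E, p_u, W_m, W_p, v)
```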
Note: the ResNet here not only gives the user multi-layer representations p^1, p^2, ..., but also gives the item matrix E multi-layer representations E^1, E^2, ..., and hence multi-layer representations m^1, m^2, ... of the recent items. This raises the question of how to aggregate the multi-layer representations into a final one. The solution, sketched in code after the score formula below, is:
E^l = {m_1^l, m_2^l, ...} -> e^l;
{e^1, e^2, ..., e^l} -> e_c (Eq. 6: a weighted sum over the m_i^l of every E^l, so that when the result is later multiplied with a target item m_j it amounts to putting a weight in front of each m_i * m_j; this is the so-called individual level);
ResNet(e_c) -> h_L (output of the last layer, the union level).
Finally, y_ui = p_u * q_i + e_c * m_i + h_L * m_i.
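Putting the pieces together, a rough sketch of the whole short-term pipeline and the final score, continuing the numpy sketch above (it reuses d, rng, E, p_u, W_m, W_p, v, softmax and individual_level_attention). The residual-block form, the number of layers, and the cross-layer aggregation (a plain average here instead of the attention of Eq. 6) are simplifications of my own, not MARank's exact equations.

```python
import numpy as np  # plus the helpers from the sketch above

def residual_block(x, W):
    """One assumed residual layer: x_next = relu(W x) + x, with W of shape (d, d)."""
    return np.maximum(W @ x, 0.0) + x

def marank_like_score(p_u, q_i, m_i, E, W_m, W_p, v, W_layers):
    """Sketch of y_ui = p_u.q_i + e_c.m_i + h_L.m_i following the steps above."""
    # multi-order representations of the user and of each recent item (Eq. 4 style)
    p_layers, E_layers = [p_u], [E]
    for W in W_layers:
        p_layers.append(residual_block(p_layers[-1], W))
        E_layers.append(np.stack([residual_block(m, W) for m in E_layers[-1]]))
    # per-layer individual-level aggregation: E^l -> e^l, guided by p^l
    e_list = [individual_level_attention(E_l, p_l, W_m, W_p, v)[0]
              for E_l, p_l in zip(E_layers, p_layers)]
    # cross-layer aggregation {e^1, ..., e^l} -> e_c
    # (a plain average here; the paper weights the m_i^l with attention, Eq. 6)
    e_c = np.mean(e_list, axis=0)
    # union level: push e_c through the residual stack and keep the last output h_L
    h_L = e_c
    for W in W_layers:
        h_L = residual_block(h_L, W)
    # final preference score
    return p_u @ q_i + e_c @ m_i + h_L @ m_i

# toy usage, reusing E, p_u, W_m, W_p, v from the previous sketch
q_i = rng.normal(size=d)             # long-term embedding of the target item
m_i = rng.normal(size=d)             # short-term embedding of the target item
W_layers = [rng.normal(size=(d, d)) for _ in range(3)]   # three assumed residual layers
y_ui = marank_like_score(p_u, q_i, m_i, E, W_m, W_p, v, W_layers)
```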
=> The takeaway here: when computing the weights in an attention model, besides the usual form of multiplying by an external vector (guided by a user representation), one can also concatenate that user representation directly with the vector to be weighted, feed the pair through the activation function, and then multiply by some vector, as in the equation just below Eq. (5); both forms are sketched below.
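A small numpy comparison of the two ways of computing an attention score mentioned above; the specific activation (tanh) and the shapes are assumed for illustration and are not copied from the paper.

```python
import numpy as np

d = 8
rng = np.random.default_rng(1)
m_i, p_u = rng.normal(size=d), rng.normal(size=d)   # vector to weight, user representation

# (a) the usual form: transform m_i guided by p_u, then multiply by an external vector
W1, W2, v = rng.normal(size=(d, d)), rng.normal(size=(d, d)), rng.normal(size=d)
score_a = v @ np.tanh(W1 @ m_i + W2 @ p_u)

# (b) the variant noted above: concatenate [m_i; p_u], put the pair through the
#     activation, then multiply by a vector of length 2d (cf. the equation below Eq. 5)
v_cat = rng.normal(size=2 * d)
score_b = v_cat @ np.tanh(np.concatenate([m_i, p_u]))
```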
| Field | Content |
| --- | --- |
| Title | Multi-order Attentive Ranking Model for Sequential Recommendation |
| Authors | Lu Yu |
| Year | 2019 |
| Keywords | Item-to-item relations expressed through weights (interpretability); residual network (ResNet); discusses the unimportance of temporal order within the same session; a new way of computing attention-model weights; Yelp; Amazon; Movies&TV; CDs&Vinyl |
| Abstract | In modern e-commerce, the temporal order behind users' transactions implies the importance of exploiting the transition dependency among items for better inferring what a user prefers to interact in "near future". The types of interaction among items are usually divided into individual-level interaction that can stand out the transition order between a pair of items, or union-level relation between a set of items and single one. However, most of existing work only captures one of them from a single view, especially on modeling the individual-level interaction. In this paper, we propose a Multi-order Attentive Ranking Model (MARank) to unify both individual- and union-level item interaction into preference inference model from multiple views. The idea is to represent user's short-term preference by embedding user himself and a set of present items into multi-order features from intermedia hidden status of a deep neural network. With the help of attention mechanism, we can obtain a unified embedding to keep the individual-level interactions with a linear combination of mapped items' features. Then, we feed the aggregated embedding to a designed residual neural network to capture union-level interaction. Thorough experiments are conducted to show the features of MARank under various component settings. Furthermore experimental results on several public datasets show that MARank significantly outperforms the state-of-the-art baselines on different evaluation metrics. The source code can be found at https://github.com/voladorlu/MARank. |