Motivation of the paper:
The global aspect set and the local aspect sets should differ. (Two separate matrix factorizations would be one way to achieve this, and the paper's solution is essentially of that flavor.)
In essence, the paper builds on traditional truncated SVD and fuses a global and a local factorization; the final estimated score is a linear combination of the two.
The so-called global model is no different from an ordinary matrix factorization;
The local part works as follows: first partition the users into groups, then run a separate matrix factorization on each group's ratings (a minimal sketch follows below).
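A minimal numpy sketch of this global-plus-local structure; the toy data, the random user grouping, the ranks, and the `truncated_svd_scores` helper are my own illustration, not the paper's rGLSVD/sGLSVD implementation:

```python
# Sketch: one "global" truncated SVD over all users, plus a separate
# "local" truncated SVD per user subset (toy data, illustration only).
import numpy as np

def truncated_svd_scores(R, k):
    """Rank-k SVD reconstruction of a dense rating matrix R."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
R = (rng.random((8, 6)) > 0.6).astype(float)   # toy implicit-feedback matrix
groups = rng.integers(0, 2, size=R.shape[0])   # toy assignment of users to 2 subsets

# "Global": one factorization of the full matrix.
global_scores = truncated_svd_scores(R, k=3)

# "Local": a separate factorization for each user subset.
local_scores = np.zeros_like(R)
for g in np.unique(groups):
    idx = np.where(groups == g)[0]
    local_scores[idx] = truncated_svd_scores(R[idx], k=2)

# Linear combination of the two estimates (the per-user weight g_u is
# discussed in the next note; a fixed 0.5 is used here just to illustrate).
scores = 0.5 * global_scores + 0.5 * local_scores
```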
The interesting part is how global and local are combined: a per-user weight g_u is introduced, so that r_ui = g_u * global_score + (1 - g_u) * local_score. Taking the derivative of the per-user squared error with respect to g_u and setting it to zero relates g_u to the parameters of the global and local factorizations, and this relation is used to update g_u. Oddly, g_u is only used to decide when the iterations stop; it is not used when making the final predictions. The paper's explanation: "g_u is enclosed inside the user latent factors."
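A hedged reconstruction of the closed-form step this note alludes to: assuming g_u is chosen to minimize the per-user squared error over the rated items $\mathcal{R}_u$ (my notation; $\hat{r}^{\,g}_{ui}$ and $\hat{r}^{\,l}_{ui}$ denote the global and local estimates), setting the derivative with respect to g_u to zero gives

$$
\min_{g_u} \sum_{i \in \mathcal{R}_u} \Big(r_{ui} - g_u\,\hat{r}^{\,g}_{ui} - (1-g_u)\,\hat{r}^{\,l}_{ui}\Big)^{2}
\;\;\Rightarrow\;\;
g_u = \frac{\sum_{i \in \mathcal{R}_u} \big(r_{ui} - \hat{r}^{\,l}_{ui}\big)\big(\hat{r}^{\,g}_{ui} - \hat{r}^{\,l}_{ui}\big)}{\sum_{i \in \mathcal{R}_u} \big(\hat{r}^{\,g}_{ui} - \hat{r}^{\,l}_{ui}\big)^{2}}.
$$

In this reading, the update is just the least-squares solution for a convex combination weight; the note's point is that this value is only monitored to decide when to stop iterating, rather than applied at prediction time.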
| Title | Local Latent Space Models for Top-N Recommendation |
| Authors | George Karypis |
| Year | 2018 |
| Keywords | |
| Abstract | Users' behaviors are driven by their preferences across various aspects of items they are potentially interested in purchasing, viewing, etc. Latent space approaches model these aspects in the form of latent factors. Although such approaches have been shown to lead to good results, the aspects that are important to different users can vary. In many domains, there may be a set of aspects for which all users care about and a set of aspects that are specific to different subsets of users. To explicitly capture this, we consider models in which there are some latent factors that capture the shared aspects and some user subset specific latent factors that capture the set of aspects that the different subsets of users care about. In particular, we propose two latent space models: rGLSVD and sGLSVD, that combine such a global and user subset specific sets of latent factors. The rGLSVD model assigns the users into different subsets based on their rating patterns and then estimates a global and a set of user subset specific local models whose number of latent dimensions can vary. The sGLSVD model estimates both global and user subset specific local models by keeping the number of latent dimensions the same among these models but optimizes the grouping of the users in order to achieve the best approximation. Our experiments on various real-world datasets show that the proposed approaches significantly outperform state-of-the-art latent space top-N recommendation approaches. |