This paper focuses on dynamically explaining recommendation results. The basic idea is to use sentences from an item's reviews as explanations: each item has many reviews, which are organized into individual sentences, and an attention mechanism then determines each sentence's weight with respect to the user's current representation. The larger the weight, the better the sentence serves as an explanation for recommending the current item to this user.
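The sentence-weighting step can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the dimensions, the plain dot-product scoring, and all function names here are my own assumptions (DER scores CNN sentence embeddings against the user state with a learned attention module).

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def explain_by_attention(user_state, sentence_embs):
    """Weight each review sentence against the user's current state.

    user_state:    (d,)   vector from the recurrent user model.
    sentence_embs: (n, d) matrix, one embedding per review sentence.
    Returns attention weights; the highest-weighted sentence is the
    one shown as the explanation.
    """
    scores = sentence_embs @ user_state   # (n,) relevance scores
    return softmax(scores)

# usage: pick the explanation sentence for a toy user/item
rng = np.random.default_rng(0)
u = rng.standard_normal(8)            # user's current representation
S = rng.standard_normal((5, 8))       # 5 sentence embeddings
w = explain_by_attention(u, S)
best = int(np.argmax(w))              # index of the explanation sentence
```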
For the user's current representation, the model follows the GRU idea: the inputs are item embeddings ordered by the user's interaction timestamps, and the output/hidden state is the user representation, which in turn influences the sentence weights.
Interestingly, the paper goes beyond a plain GRU by incorporating time-gap information, injecting it directly into the GRU's network structure to form a time-aware GRU (T-GRU). One shortcoming is that this time information relies on a pre-defined vector. (How to pre-define it? Set all elements to a value greater than 0.)
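One plausible way to wire the time gap into the GRU, sketched under my own assumptions: a positive vector `v` (standing in for the pre-defined all-positive vector mentioned above) exponentially decays the carried-over hidden state according to the gap `dt` since the last interaction, before the standard GRU gates fire. The exact injection point and parameterization in DER may differ.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TGRUCell:
    """Minimal time-aware GRU cell (illustrative sketch, not DER's exact form)."""

    def __init__(self, d_in, d_hid, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1  # small random init for the demo
        self.Wz = rng.standard_normal((d_hid, d_in + d_hid)) * s  # update gate
        self.Wr = rng.standard_normal((d_hid, d_in + d_hid)) * s  # reset gate
        self.Wh = rng.standard_normal((d_hid, d_in + d_hid)) * s  # candidate
        # pre-defined time vector: all elements strictly positive
        self.v = np.abs(rng.standard_normal(d_hid)) + 1e-2

    def step(self, x, h, dt):
        # time gate: a larger gap dt shrinks the carried-over state more
        h = h * np.exp(-dt * self.v)
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)                                  # update gate
        r = sigmoid(self.Wr @ xh)                                  # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))    # candidate state
        return (1.0 - z) * h + z * h_tilde
```

Running the sequence of a user's items through `step`, with the real inter-visit gaps as `dt`, yields the dynamic user representation that feeds the attention module.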
The paper evaluates explanation quality by hiring crowd workers, an approach worth borrowing.
| Title | Dynamic Explainable Recommendation based on Neural Attentive Models |
| --- | --- |
| Authors | Xu Chen; Yongfeng Zhang |
| Year | 2019 |
| Keywords | CNN; GRU; time-aware GRU; from static to dynamic explanations; attention mechanism; weight visualization; interpretability; Neural Attentive Model for Explainable Recommendation by Learning User Dynamic Preference |
| Abstract | Providing explanations in a recommender system is getting more and more attention in both industry and research communities. Most existing explainable recommender models regard user preferences as invariant to generate static explanations. However, in real scenarios, a user’s preference is always dynamic, and she may be interested in different product features at different states. The mismatching between the explanation and user preference may degrade customers’ satisfaction, confidence and trust for the recommender system. With the desire to fill up this gap, in this paper, we build a novel Dynamic Explainable Recommender (called DER) for more accurate user modeling and explanations. In specific, we design a time-aware gated recurrent unit (GRU) to model user dynamic preferences, and profile an item by its review information based on sentence-level convolutional neural network (CNN). By attentively learning the important review information according to the user current state, we are not only able to improve the recommendation performance, but also can provide explanations tailored for the users’ current preferences. We conduct extensive experiments to demonstrate the superiority of our model for improving recommendation performance. And to evaluate the explainability of our model, we first present examples to provide intuitive analysis on the highlighted review information, and then crowd-sourcing based evaluations are conducted to quantitatively verify our model’s superiority. |