1) The Key-Value Memory Model derives from the End-to-End Memory Network (in the End-to-End model, the key vector and the value vector are the same).
2) In this paper, the key embedding and value embedding are obtained by multiplying pre-defined feature vectors by a parameter matrix A or B; the learned parameters are A, B, and the per-hop matrices Ri, where A may equal B.
3) A "hop" in this paper corresponds to one iteration: each hop re-computes the addressing (relevance) between the query and the keys.
4) Each hop produces an output, which is passed to the next hop as the result learned so far.
5) The softmax function normalizes a score vector into a probability distribution.
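The hop mechanism in notes 1)–5) can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the function name, shapes, and the simplified query update `q = R @ (q + o)` are my assumptions (the paper applies embedding matrices to pre-defined features first, which is omitted here).

```python
import numpy as np

def softmax(x):
    # Normalize a score vector into a probability distribution (note 5).
    e = np.exp(x - x.max())
    return e / e.sum()

def kv_memory_read(query, keys, values, R_list):
    """Hypothetical sketch of key-value memory reading.

    query:  (d,)   embedded question vector
    keys:   (n, d) key embeddings (addressing stage)
    values: (n, d) value embeddings (output stage)
    R_list: one (d, d) matrix per hop (the Ri parameters, note 2)
    """
    q = query
    for R in R_list:            # each hop is one iteration (note 3)
        p = softmax(keys @ q)   # addressing: relevance of each key to q
        o = p @ values          # output: probability-weighted sum of values
        q = R @ (q + o)         # pass this hop's result to the next (note 4)
    return q
```

With key = value the same loop reduces to the End-to-End Memory Network read (note 1).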
Question 1: How does the model capture long-term vs. short-term relations? (It does not seem to incorporate a time factor.)
Question 2: When applying this to recommendation, how should the memory (key, value) pairs be constructed?
| Field | Content |
| --- | --- |
| Title | Key-Value Memory Networks for Directly Reading Documents |
| Authors | Alexander H. Miller |
| Year | 2016 |
| Keywords | Key-Value Memory Model, End-to-End, softmax |
| Abstract | Directly reading documents and being able to answer questions from them is an unsolved challenge. To avoid its inherent difficulty, question answering (QA) has been directed towards using Knowledge Bases (KBs) instead, which has proven effective. Unfortunately KBs often suffer from being too restrictive, as the schema cannot support certain types of answers, and too sparse, e.g. Wikipedia contains much more information than Freebase. In this work we introduce a new method, Key-Value Memory Networks, that makes reading documents more viable by utilizing different encodings in the addressing and output stages of the memory read operation. To compare using KBs, information extraction or Wikipedia documents directly in a single framework we construct an analysis tool, WIKIMOVIES, a QA dataset that contains raw text alongside a preprocessed KB, in the domain of movies. Our method reduces the gap between all three settings. It also achieves state-of-the-art results on the existing WIKIQA benchmark. |