LSTM reference: Understanding LSTM Networks (blog post by colah).
This paper is about text translation (machine translation).
Core idea: combine the output of a stacked LSTM (the machine-learning side) with a rule-based result from the phrase table (the statistical side) to produce a more faithful translation.
How the two are combined: the LSTM produces a score for each candidate word, and the rules compute a score as well (that computation also uses intermediate quantities from the LSTM, the attention vector, and the relations between words). The two scores are then linearly weighted to pick the best word; see the sketch below.
Takeaway: could this way of combining models also be applied to medical data?
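A minimal sketch of the linear weighting described above, assuming the NMT decoder emits log-probabilities over the vocabulary and the bonus values have already been computed from the attention vector and phrase translation probabilities; `combine_scores` and `weight` are hypothetical names, not taken from the paper:

```python
import numpy as np

def combine_scores(nmt_log_probs, bonus, weight=0.5):
    """Add a weighted phrase-table bonus to the NMT word scores.

    nmt_log_probs: (vocab_size,) log-probabilities from the NMT decoder
    bonus:         (vocab_size,) bonus values, zero for non-recommended words
    weight:        hypothetical interpolation weight (a tunable hyperparameter)
    """
    return nmt_log_probs + weight * bonus

# Toy vocabulary of four words; word 2 is recommended by the phrase table.
nmt_scores = np.log(np.array([0.4, 0.3, 0.2, 0.1]))
bonus = np.array([0.0, 0.0, 1.5, 0.0])  # derived from attention + phrase prob
print(int(np.argmax(nmt_scores)))                         # 0: NMT alone
print(int(np.argmax(combine_scores(nmt_scores, bonus))))  # 2: bonus flips the choice
```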
| Field | Content |
| --- | --- |
| Title | Phrase Table as Recommendation Memory for Neural Machine Translation |
| Authors | Yang Zhao |
| Year | 2018 |
| Keywords | stacked LSTM; combining rules with ML |
| Abstract | Neural Machine Translation (NMT) has drawn much attention due to its promising translation performance recently. However, several studies indicate that NMT often generates fluent but unfaithful translations. In this paper, we propose a method to alleviate this problem by using a phrase table as recommendation memory. The main idea is to add bonus to words worthy of recommendation, so that NMT can make correct predictions. Specifically, we first derive a prefix tree to accommodate all the candidate target phrases by searching the phrase translation table according to the source sentence. Then, we construct a recommendation word set by matching between candidate target phrases and previously translated target words by NMT. After that, we determine the specific bonus value for each recommendable word by using the attention vector and phrase translation probability. Finally, we integrate this bonus value into NMT to improve the translation results. The extensive experiments demonstrate that the proposed methods obtain remarkable improvements over the strong attention-based NMT. |
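The abstract's first two steps, building a prefix tree over the candidate target phrases and matching it against the words NMT has already produced, can be sketched as below. This is an illustrative reconstruction under assumed data structures, not the authors' code; `TrieNode`, `build_prefix_tree`, `recommend_next_words`, and the toy phrases are all hypothetical:

```python
class TrieNode:
    def __init__(self):
        self.children = {}       # next word -> TrieNode
        self.phrase_prob = None  # phrase translation probability if a phrase ends here

def build_prefix_tree(candidate_phrases):
    """Insert every candidate target phrase (word list, probability) into a trie."""
    root = TrieNode()
    for words, prob in candidate_phrases:
        node = root
        for w in words:
            node = node.children.setdefault(w, TrieNode())
        node.phrase_prob = prob
    return root

def recommend_next_words(root, translated_suffixes):
    """Collect words the phrase table recommends as the next target word,
    given suffixes of the partial NMT translation matched against the trie."""
    recommended = {}
    for suffix in translated_suffixes + [[]]:  # [] = phrases starting fresh here
        node = root
        for w in suffix:
            node = node.children.get(w)
            if node is None:
                break
        else:
            for w, child in node.children.items():
                # Keep the best phrase probability seen for each recommendable word.
                p = child.phrase_prob or 0.0
                recommended[w] = max(recommended.get(w, 0.0), p)
    return recommended

phrases = [(["world", "peace"], 0.8), (["peace"], 0.6)]
tree = build_prefix_tree(phrases)
# After having translated "world", the table recommends "peace" with prob 0.8.
print(recommend_next_words(tree, [["world"]]))
```

Each recommended word's probability would then feed the bonus computation in the sketch further above.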