Core idea: the paper proposes an attention-based neural Encoder-Decoder for set-structured inputs. The approach is conceptually similar to self-attention NN, KV-MemNN, and multi-head attention, in that all of them model the interactions and dependencies among the elements within a set (a code sketch follows the table below).
| Title | Set Transformer: A Framework for Attention-based Permutation-Invariant Neural Networks |
| --- | --- |
| Authors | Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, Yee Whye Teh |
| Year | 2019 |
| Keywords | self-attention |
| Abstract | Many machine learning tasks such as multiple instance learning, 3D shape recognition, and few-shot image classification are defined on sets of instances. Since solutions to such problems do not depend on the order of elements of the set, models used to address them should be permutation invariant. We present an attention-based neural network module, the Set Transformer, specifically designed to model interactions among elements in the input set. The model consists of an encoder and a decoder, both of which rely on attention mechanisms. In an effort to reduce computational complexity, we introduce an attention scheme inspired by inducing point methods from the sparse Gaussian process literature. It reduces the computation time of self-attention from quadratic to linear in the number of elements in the set. We show that our model is theoretically attractive and we evaluate it on a range of tasks, demonstrating state-of-the-art performance compared to recent methods for set-structured data. |
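To make the encoder/decoder structure and the inducing-point trick concrete, below is a minimal PyTorch sketch of the paper's three building blocks: MAB (Multihead Attention Block), ISAB (Induced Set Attention Block, which cuts self-attention cost from O(n²) to O(nm) using m learned inducing points), and PMA (Pooling by Multihead Attention, the decoder's aggregation step). This is an illustrative simplification, not the authors' reference implementation: it assumes `nn.MultiheadAttention` as the attention primitive, and all dimensions, widths, and hyperparameters are placeholder choices.

```python
import torch
import torch.nn as nn


class MAB(nn.Module):
    """Multihead Attention Block: queries X attend over keys/values Y,
    with residual connections and layer norm."""
    def __init__(self, dim, num_heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ln0 = nn.LayerNorm(dim)
        self.ln1 = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, x, y):
        h = self.ln0(x + self.attn(x, y, y, need_weights=False)[0])
        return self.ln1(h + self.ff(h))


class ISAB(nn.Module):
    """Induced Set Attention Block: m learned inducing points first summarize
    the n-element set, then each element attends back to that summary,
    so the cost is O(n*m) rather than O(n^2)."""
    def __init__(self, dim, num_heads, num_inducing):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(1, num_inducing, dim))
        self.mab0 = MAB(dim, num_heads)
        self.mab1 = MAB(dim, num_heads)

    def forward(self, x):                                # x: (batch, n, dim)
        i = self.inducing.expand(x.size(0), -1, -1)      # (batch, m, dim)
        h = self.mab0(i, x)                              # inducing points attend to the set
        return self.mab1(x, h)                           # set attends back to the summary


class PMA(nn.Module):
    """Pooling by Multihead Attention: k learned seed vectors attend over the
    encoded set, yielding a permutation-invariant k-vector representation."""
    def __init__(self, dim, num_heads, num_seeds):
        super().__init__()
        self.seeds = nn.Parameter(torch.randn(1, num_seeds, dim))
        self.mab = MAB(dim, num_heads)

    def forward(self, x):                                # x: (batch, n, dim)
        return self.mab(self.seeds.expand(x.size(0), -1, -1), x)


# Encoder = stacked ISABs (permutation-equivariant);
# Decoder = PMA pooling (permutation-invariant).
encoder = nn.Sequential(ISAB(dim=128, num_heads=4, num_inducing=16),
                        ISAB(dim=128, num_heads=4, num_inducing=16))
decoder = PMA(dim=128, num_heads=4, num_seeds=1)

x = torch.randn(8, 100, 128)   # a batch of 8 sets, 100 elements each
out = decoder(encoder(x))      # (8, 1, 128): one vector per set
```

Note how the division of labor matches the abstract: the ISAB encoder layers are permutation-equivariant, and the PMA pooling step is what makes the final set representation permutation-invariant.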