Abstract

Endowing artificial intelligence systems with effective memory has been a long-standing research goal. Recurrent neural networks and long short-term memory (LSTM) networks, among other neural architectures, have some inherent memory capability. More recently, memory-augmented neural networks, such as the neural Turing machine (NTM) and its variants, implement a separate memory module that is accessed via read and write heads. Despite their success on simple algorithmic tasks, such as copying and repeat copying, NTMs fail on complex tasks with long-term dependencies because of their limited memory capacity. In this paper, we propose a new memory module in which data storage and access are based on a graph-based neural structure rather than a matrix model. The design is inspired by the human memory system, in which memories are stored via synapses (connections between neurons) and recalled along paths passing through different neuronal networks. Differentiable mechanisms are designed for this graph-based neural memory so that it can be trained via backpropagation. The proposed structure is used to solve tasks with long-term dependencies and operations on input sequences, as well as the bAbI question-answering dataset. The analysis shows that the proposed memory system solves these tasks better than the NTM and LSTM in terms of both convergence speed and final error.
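The abstract does not give the paper's equations, but the idea of a differentiable graph-based memory — soft, content-based addressing over nodes, recall propagated along weighted edges, and smooth writes so gradients can flow — can be illustrated with a minimal sketch. Everything below (the class name `GraphMemory`, the Hebbian-style edge update, the number of propagation hops) is an assumption for illustration, not the authors' actual mechanism:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

class GraphMemory:
    """Illustrative sketch (NOT the paper's design): memory as a graph whose
    nodes hold content vectors and whose weighted edges guide recall. All
    operations are smooth, so in an autodiff framework (e.g. PyTorch) gradients
    would flow through both reads and writes."""

    def __init__(self, n_nodes, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.content = rng.standard_normal((n_nodes, dim)) * 0.1  # node vectors
        self.edges = np.full((n_nodes, n_nodes), 1.0 / n_nodes)   # soft adjacency

    def read(self, query, hops=2):
        # Soft content-based addressing: attention weights over all nodes.
        w = softmax(self.content @ query)
        # Recall along a path: propagate the address through the edge weights,
        # mimicking retrieval via connections rather than a flat matrix lookup.
        for _ in range(hops):
            w = w @ self.edges
            w = w / w.sum()
        return w @ self.content  # weighted sum of node contents

    def write(self, key, value, lr=0.5):
        # Soft write: blend the value into nodes in proportion to the address,
        # and strengthen edges between co-activated nodes (Hebbian-style).
        w = softmax(self.content @ key)
        self.content += lr * np.outer(w, value - w @ self.content)
        self.edges += lr * np.outer(w, w)
        self.edges /= self.edges.sum(axis=1, keepdims=True)  # keep rows stochastic
```

Because reads and writes are weighted blends rather than hard lookups, repeated writes of a vector sharpen the attention onto a few nodes and strengthen the edges among them, after which a read with the same key recalls a vector close to what was stored.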
