Abstract

It is well known that canonical recurrent neural networks (RNNs) have difficulty learning long-term dependencies, a limitation addressed by the memory structures of long short-term memory (LSTM) networks. Neural Turing machines (NTMs) are a variant of RNNs that implement a notion of programmable memory with neural network controllers, and can learn simple algorithmic tasks. Matrix neural networks, on the other hand, feature matrix-valued representations, which inherently have the potential to preserve the spatial structure of data, in contrast to canonical neural networks that use only vector-based representations. One may then argue that neural networks with matrix representations could provide better memory capacity. In this paper, we define and study a probabilistic notion of memory capacity based on Fisher information for matrix-based RNNs. We derive bounds on the memory capacity of such networks under various hypotheses and compare them with their conventional (vector) counterparts. In particular, we show that the memory capacity of such networks is bounded by N² for an N×N state matrix, which generalizes the corresponding results for vector networks. We also show and analyze the increase in memory capacity introduced when such networks are equipped with an external state memory, as in NTMs. Consequently, we construct NTMs whose RNN controllers use a matrix-based representation of external memory, leading us to introduce Matrix NTMs. We demonstrate the performance of this class of memory networks on algorithmic learning tasks such as copying and recall, and compare it with Matrix RNNs. We find that the addition of external memory improves the performance of Matrix NTMs over Matrix RNNs.
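To make the contrast with vector RNNs concrete, the following is a minimal sketch of a matrix-valued recurrent update, assuming a bilinear state transition of the form H_t = tanh(U H_{t-1} V + A X_t B); the parameter names U, V, A, B are hypothetical and this is not necessarily the paper's exact formulation.

```python
import numpy as np

# Illustrative sketch (assumed form, not the paper's exact model): a matrix
# RNN keeps an N x N state matrix H instead of a length-N state vector, and
# updates it with bilinear products that act on rows and columns separately,
# preserving the 2-D structure of the input:
#     H_t = tanh(U @ H_{t-1} @ V + A @ X_t @ B)

rng = np.random.default_rng(0)
N = 4  # the state has N^2 scalar entries, matching the N^2 capacity bound

# Hypothetical trainable parameter matrices, small random initialization.
U, V, A, B = (rng.standard_normal((N, N)) * 0.1 for _ in range(4))

H = np.zeros((N, N))              # matrix-valued hidden state
X = rng.standard_normal((N, N))   # matrix-valued input (e.g. an image patch)

H = np.tanh(U @ H @ V + A @ X @ B)
print(H.shape)  # the state remains an N x N matrix after the update
```

A vector RNN of the same state dimension would carry only N scalars of state per step; here the state carries N² scalars, which is the quantity the paper's capacity bound refers to.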
