Abstract

Recurrent neural networks, unlike feed-forward networks, are able to process inputs with temporal context. The key role in this process is played by the dynamics of the network, which transform input data into recurrent-layer states. Several authors have described and analyzed the dynamics of small recurrent neural networks with two or three hidden units. In our work we introduce techniques that make it possible to visualize and analyze the dynamics of large recurrent neural networks with dozens of units and to reveal both stable and unstable fixed points (attractors and saddle points), which are important for understanding the principles of successful task processing. As a practical example of this approach, we study the dynamics of a simple recurrent network trained with two different training algorithms on the context-free language aⁿbⁿ.
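
The abstract does not specify how the stable and unstable fixed points are located, so the sketch below should not be read as the authors' method. One standard technique for revealing both attractors and saddle points in a trained RNN is to minimize the "speed" q(h) = ½‖F(h) − h‖² of the state-update map F, since saddle points are zeros of q that plain forward iteration would never settle on. This is a minimal illustration under assumed conventions: a generic tanh recurrent layer with a fixed (absorbed) input, random placeholder weights rather than a trained network, and illustrative tolerances.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Assumed recurrent layer: h' = tanh(W h + b), with any fixed input
# absorbed into the bias b. Weights are random placeholders.
n_units = 30
W = rng.normal(scale=1.0 / np.sqrt(n_units), size=(n_units, n_units))
b = rng.normal(scale=0.1, size=n_units)

def step(h):
    """One autonomous update of the recurrent-layer state."""
    return np.tanh(W @ h + b)

def speed(h):
    """q(h) = 1/2 ||step(h) - h||^2; fixed points are zeros of q."""
    d = step(h) - h
    return 0.5 * d @ d

# Minimize q from many random starting states; minima with q close to 0
# are fixed points, including unstable ones (saddles).
fixed_points = []
for _ in range(50):
    h0 = rng.normal(scale=0.5, size=n_units)
    res = minimize(speed, h0, method="L-BFGS-B", tol=1e-12)
    if res.fun < 1e-8:
        fixed_points.append(res.x)

# Classify each fixed point via the Jacobian of the update map:
# all eigenvalues inside the unit circle -> attractor; otherwise a
# saddle or repeller (unstable fixed point).
for h in fixed_points:
    J = (1.0 - np.tanh(W @ h + b) ** 2)[:, None] * W  # d step / d h
    stable = np.all(np.abs(np.linalg.eigvals(J)) < 1.0)
    print("attractor" if stable else "saddle/unstable", np.round(h[:3], 3))
```

Projecting the recovered fixed points and a few state trajectories onto their first two principal components is one common way to turn this analysis into the kind of visualization the abstract describes.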
