Abstract

Continuous-time recurrent neural networks are widely used as models of neural dynamics and also have applications in machine learning. But their dynamics are not yet well understood, especially when they are driven by external stimuli. In this article, we study the response of stable and unstable networks to different harmonically oscillating stimuli by varying a parameter ρ, the ratio between the timescale of the network and that of the stimulus, and use the dimensionality of the network’s attractor as an estimate of the complexity of this response. Additionally, we propose a novel technique for exploring the stationary points and locally linear dynamics of these networks in order to understand the origin of input-dependent dynamical transitions. Attractors in both stable and unstable networks show a peak in dimensionality for intermediate values of ρ, with the latter consistently showing a higher dimensionality than the former, which exhibit a resonance-like phenomenon. We explain changes in the dimensionality of a network’s dynamics in terms of changes in the underlying structure of its vector field by analysing stationary points. Furthermore, we uncover the coexistence of underlying attractors with various geometric forms in unstable networks. As ρ is increased, our visualisation technique shows the network passing through a series of phase transitions with its trajectory taking on a sequence of qualitatively distinct figure-of-eight, cylinder, and spiral shapes. These findings bring us one step closer to a comprehensive theory of this important class of neural networks by revealing the subtle structure of their dynamics under different conditions.
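
To make the setup concrete, here is a minimal Python sketch (not the authors’ code) of the class of system the abstract describes: a continuous-time recurrent network τ dx/dt = −x + W tanh(x) + I(t) driven by a harmonic stimulus. The function name, the tanh nonlinearity, the Euler integration, and the convention ρ = τ / T_stim (network time constant divided by stimulus period) are all illustrative assumptions; the paper’s exact equations and conventions may differ.

```python
import numpy as np

def simulate_ctrnn(W, tau=1.0, rho=1.0, amp=1.0, dt=0.01, t_max=200.0, seed=0):
    """Integrate tau * dx/dt = -x + W @ tanh(x) + I(t) with forward Euler."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    T_stim = tau / rho                    # rho read as tau / T_stim (an assumption)
    x = 0.1 * rng.standard_normal(n)      # small random initial condition
    steps = int(t_max / dt)
    traj = np.empty((steps, n))
    for k in range(steps):
        t = k * dt
        drive = amp * np.sin(2 * np.pi * t / T_stim)  # harmonic stimulus to all units
        x = x + (dt / tau) * (-x + W @ np.tanh(x) + drive)
        traj[k] = x
    return traj

# Example: a random network in the classically unstable regime (gain g > 1)
n, g = 100, 1.5
W = g / np.sqrt(n) * np.random.default_rng(1).standard_normal((n, n))
traj = simulate_ctrnn(W, rho=0.5)
```

Sweeping rho across several orders of magnitude in such a simulation is one way to reproduce the kind of stimulus-timescale experiment the abstract describes.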

Highlights

  • Continuous-time recurrent neural networks are prevalent in multiple areas of neural and cognitive computation

  • It has long been proven that these networks can approximate any dynamical system to arbitrary precision [11, 12], but further empirical study is needed to understand the practicalities of such approximations and how network dynamics are shaped by incoming stimuli [13]

  • In “Methods”, we present a novel approach to stationary point dynamics and introduce a technique to measure attractor dimensionality adapted from Tajima et al. [17] (a simplified stand-in for such a measure is sketched after this list)
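
The dimensionality measure adapted from Tajima et al. [17] is not reproduced here; as a hedged stand-in, the sketch below uses the widely known PCA participation ratio, D = (Σᵢ λᵢ)² / Σᵢ λᵢ², computed on the post-transient part of a trajectory. The function name and the 50% transient cutoff are illustrative choices, not the paper’s.

```python
import numpy as np

def participation_ratio(traj, discard=0.5):
    """PCA participation ratio of a trajectory (time x units), transient removed."""
    X = traj[int(len(traj) * discard):]       # drop the initial transient
    X = X - X.mean(axis=0)                    # center before computing covariance
    lam = np.linalg.eigvalsh(np.cov(X.T))     # covariance eigenvalues (PCA variances)
    lam = np.clip(lam, 0.0, None)             # guard against tiny negative values
    return lam.sum() ** 2 / (lam ** 2).sum()
```

Applied to a trajectory such as traj from the simulation sketch above, this returns an effective dimensionality between 1 and the number of units.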


Introduction

Continuous-time recurrent neural networks are prevalent in multiple areas of neural and cognitive computation. They have been successfully used as models of cortical dynamics and function [1, 2] and have found application in machine learning [3,4,5,6,7,8]. A characteristic phenomenon exhibited by such networks is a qualitative change in their dynamics, depending on the precise values of some of their parameters, commonly referred to as a bifurcation. This phenomenon has been studied analytically for networks with fewer than ten neurons [14, 15], but an analytical approach to larger networks comprising hundreds of neurons would be very challenging.
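
One standard numerical recipe for probing stationary points and locally linear dynamics, sketched below, is a plausible reading of the kind of analysis discussed here, not a confirmed reproduction of the paper’s novel technique: freeze the input at a value I0, find roots of the autonomous vector field from many random initial guesses, and classify each root by the eigenvalues of the local Jacobian. All names, seeds, and tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import root

def stationary_points(W, I0=0.0, n_seeds=50, seed=0):
    """Roots of F(x) = -x + W @ tanh(x) + I0 from many random initial guesses."""
    rng = np.random.default_rng(seed)
    n = W.shape[0]
    F = lambda x: -x + W @ np.tanh(x) + I0
    points = []
    for _ in range(n_seeds):
        sol = root(F, 2.0 * rng.standard_normal(n))
        if sol.success and not any(np.allclose(sol.x, p, atol=1e-4) for p in points):
            points.append(sol.x)              # keep only distinct roots
    return points

def leading_eigenvalue(W, x_star, tau=1.0):
    """Largest real part of the Jacobian J = (-I + W @ diag(1 - tanh(x*)^2)) / tau."""
    n = W.shape[0]
    J = (-np.eye(n) + W * (1.0 - np.tanh(x_star) ** 2)) / tau
    return np.linalg.eigvals(J).real.max()

# Negative leading eigenvalue => locally stable point; positive => saddle or repeller.
W = 1.5 / np.sqrt(100) * np.random.default_rng(1).standard_normal((100, 100))
for p in stationary_points(W):
    print(f"leading eigenvalue: {leading_eigenvalue(W, p):+.3f}")
```

Tracking how this set of points and their leading eigenvalues change as a parameter such as the input level is varied is one empirical route to the bifurcations described above.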
