Abstract

Continuous time recurrent neural networks (CTRNN) are systems of coupled ordinary differential equations that are simple enough to be insightful for describing learning and computation, from both biological and machine learning viewpoints. We describe a direct constructive method of realising finite state input-dependent computations on an arbitrary directed graph. The constructed system has an excitable network attractor whose dynamics we illustrate with a number of examples. The resulting CTRNN has intermittent dynamics: trajectories spend long periods of time close to steady states, with rapid transitions between states. Depending on parameters, transitions between states can either be excitable (inputs or noise need to exceed a threshold to induce the transition), or spontaneous (transitions occur without input or noise). In the excitable case, we show the threshold for excitability can be made arbitrarily sensitive.

Highlights

  • It is natural to try to understand computational properties of neural systems through the paradigm of network dynamical systems, where a number of dynamically simple units interact to give computation as an emergent property of the system

  • The current paper demonstrates that continuous time recurrent neural network (CTRNN) dynamics are sufficiently rich to realise excitable networks with arbitrary graph topology by specifying appropriate connection weights

  • As we will see in the examples which follow, the parameters are chosen such that the dynamics are close to a saddle-node bifurcation; more precisely, such that the system is near a codimension-N bifurcation at which there are N saddle-node bifurcations of the equilibria ξk
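The proximity to saddle-node bifurcations can be illustrated with the one-dimensional normal form dx/dt = μ + x², which is the local model for how a stable equilibrium and a saddle collide and disappear. The sketch below (an illustrative normal-form computation, not taken from the paper) enumerates the equilibria as the parameter μ crosses zero:

```python
import numpy as np

def equilibria(mu):
    """Equilibria of the saddle-node normal form dx/dt = mu + x**2.

    For mu < 0 there are two equilibria at x = -sqrt(-mu) (stable,
    since f'(x) = 2x < 0 there) and x = +sqrt(-mu) (unstable).
    They collide at mu = 0 and vanish for mu > 0.
    """
    if mu < 0:
        r = float(np.sqrt(-mu))
        return [-r, r]
    if mu == 0:
        return [0.0]
    return []

# Before the bifurcation: a stable/unstable pair of equilibria.
print(equilibria(-0.25))  # [-0.5, 0.5]
# After the bifurcation: no equilibria remain.
print(equilibria(0.25))   # []
```

A system tuned just below such a bifurcation retains its equilibria but with very weak attraction along one direction, which is what allows small inputs or noise to trigger transitions between states.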

Introduction

It is natural to try to understand computational properties of neural systems through the paradigm of network dynamical systems, where a number of dynamically simple units interact to give computation as an emergent property of the system. A CTRNN is a set of differential equations, each with one scalar variable that represents the level of activation of a neuron, with feedback via a saturating nonlinearity or “activation function”. These models have been extensively investigated in the past decades as simple neurally-inspired systems that can (without input) have complex dynamics by virtue of the nonlinearities present [6]. In the excitable case, the threshold for excitability can be made arbitrarily sensitive; we prove this (with details in Appendix B) for the case of the piecewise affine activation function φP.
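A common form of the CTRNN equations is τ dyᵢ/dt = −yᵢ + Σⱼ wᵢⱼ φ(yⱼ) + Iᵢ, where yᵢ is the activation of unit i, wᵢⱼ are connection weights, Iᵢ is an external input, and φ is the saturating activation function. The sketch below integrates such a system with forward-Euler stepping; the tanh activation, the weights, the inputs, and the step size are illustrative choices, not values from the paper:

```python
import numpy as np

def simulate_ctrnn(W, I, y0, dt=0.01, steps=5000, tau=1.0):
    """Euler-integrate the CTRNN  tau * dy/dt = -y + W @ tanh(y) + I.

    W    -- (n, n) connection-weight matrix
    I    -- (n,) constant external input
    y0   -- (n,) initial activations
    Returns the activation vector after `steps` Euler steps.
    """
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        y += dt * (-y + W @ np.tanh(y) + I) / tau
    return y

# Two weakly coupled units with a small constant input.
W = np.array([[0.0, 0.5],
              [0.5, 0.0]])
I = np.array([0.1, 0.1])
y_final = simulate_ctrnn(W, I, y0=[0.0, 0.0])
# With weak coupling the trajectory settles onto a stable equilibrium,
# i.e. y_final satisfies  -y + W @ tanh(y) + I = 0  to high accuracy.
```

Stronger or carefully structured coupling, as in the construction described in this paper, instead produces many near-saddle states with excitable connections between them.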

Construction of a CTRNN with a network attractor
Realization of arbitrary directed graphs as network attractors
Excitable networks for smooth nonlinearities
Two vertex graph
A ten node network
Discussion
A Definition of excitable network
Case 1
Absence of excitable connections for edges absent