Abstract

Echo State Networks (ESNs) are a class of single-layer recurrent neural networks that have attracted considerable recent attention. In this paper we prove that a suitable ESN, trained on a series of measurements of an invertible dynamical system, induces a C1 map from the dynamical system’s phase space to the ESN’s reservoir space. We call this the Echo State Map. We then prove that the Echo State Map is generically an embedding with positive probability. Under additional mild assumptions, we further conjecture that the Echo State Map is almost surely an embedding. For sufficiently large, and specially structured, but still randomly generated ESNs, we prove that there exists a linear readout layer that allows the ESN to predict the next observation of a dynamical system arbitrarily well. Consequently, if the dynamical system under observation is structurally stable, then the trained ESN will exhibit dynamics that are topologically conjugate to the future behaviour of the observed dynamical system. Our theoretical results connect the theory of ESNs to the delay-embedding literature for dynamical systems, and are supported by numerical evidence from simulations of the classical Lorenz equations. The simulations confirm that, from a one-dimensional observation function, an ESN can accurately infer a range of geometric and topological features of the dynamics, such as the eigenvalues of equilibrium points, Lyapunov exponents and homology groups.
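
To give a concrete picture of the training procedure summarised above, the following is a minimal sketch, not the construction analysed in the paper: an ESN driven by a one-dimensional observation (here, the x-coordinate) of the Lorenz system, with a linear readout fitted by ridge regression to predict the next observation and then iterated autonomously. The reservoir update rule tanh(A x + c u + b), the reservoir size, spectral radius, washout length and regularisation constant are all illustrative assumptions rather than values prescribed by the paper.

```python
# Minimal illustrative sketch (assumed parameters, not the paper's construction):
# an ESN trained on the x-coordinate of the Lorenz system with a ridge-regression
# linear readout that predicts the next observation.
import numpy as np

rng = np.random.default_rng(0)

# --- Generate a Lorenz trajectory with a simple RK4 integrator ---
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(state, dt):
    k1 = lorenz(state)
    k2 = lorenz(state + 0.5 * dt * k1)
    k3 = lorenz(state + 0.5 * dt * k2)
    k4 = lorenz(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

dt, n_steps = 0.01, 10000
traj = np.empty((n_steps, 3))
traj[0] = np.array([1.0, 1.0, 1.0])
for k in range(1, n_steps):
    traj[k] = rk4_step(traj[k - 1], dt)

u = traj[:, 0]  # one-dimensional observation function: the x-coordinate

# --- Randomly generated reservoir: x_{k+1} = tanh(A x_k + c u_k + b) ---
N = 500                                              # reservoir dimension (illustrative)
A = rng.normal(size=(N, N)) / np.sqrt(N)
A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))      # rescale spectral radius below 1
c = rng.normal(size=N)                               # input weights
b = 0.1 * rng.normal(size=N)                         # bias

X = np.zeros((n_steps, N))
for k in range(n_steps - 1):
    X[k + 1] = np.tanh(A @ X[k] + c * u[k] + b)

# --- Linear readout W fitted by ridge regression so that W x_k ≈ u_k ---
# (x_k encodes the observation history u_0, ..., u_{k-1}, so this is
#  a one-step-ahead prediction of the observation.)
washout, train_end = 500, 8000
Xtr, ytr = X[washout:train_end], u[washout:train_end]
reg = 1e-6
W = np.linalg.solve(Xtr.T @ Xtr + reg * np.eye(N), Xtr.T @ ytr)

# --- Autonomous prediction: feed the ESN's own output back as the next input ---
x = X[train_end].copy()
pred = []
for _ in range(1000):
    u_hat = W @ x                           # predicted next observation
    pred.append(u_hat)
    x = np.tanh(A @ x + c * u_hat + b)      # drive the reservoir with the prediction

print("First few predicted observations:", np.round(pred[:5], 3))
```

After training, the closed prediction loop above runs the reservoir on its own output; it is trajectories generated in this autonomous mode whose geometric and topological features (equilibrium eigenvalues, Lyapunov exponents, homology groups) are compared against the observed Lorenz dynamics in the paper's numerical experiments.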
