Abstract

In this work, we study the dynamical properties of a machine learning technique called reservoir computing in order to gain insight into how representations of chaotic signals are encoded through learning. We train the reservoir on individual chaotic Lorenz signals. The Lorenz system is governed by a set of three coupled differential equations and is known to have three fixed points, all of which are unstable in the chaotic regime of the strange attractor. Examining the fixed points of a reservoir whose output weights have been trained allows us to determine whether the inherent dynamics of the Lorenz system are transposed onto the reservoir dynamics during learning. We do so using a novel fixed-point-finding technique called directional fibers. Directional fibers are mathematical objects that systematically locate fixed points in high-dimensional spaces, and we find them to be competitive with and complementary to traditional approaches. We find that the reservoir, after training of its output weights, contains a higher-dimensional projection of the Lorenz fixed points with matching stability, even though the training data did not include the fixed points. This shows that the reservoir does indeed learn dynamical properties of the Lorenz attractor. We also find that the directional fiber identifies additional fixed points in the reservoir space outside the projected Lorenz attractor region; these amplify perturbations during prediction and contribute to the failure of long-term time series prediction.
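
The abstract does not state the Lorenz equations or parameter values it refers to. The following is a minimal sketch, assuming the standard Lorenz form with the classic chaotic-regime parameters sigma = 10, rho = 28, beta = 8/3 (an assumption; the paper may use different values). It computes the three fixed points in closed form and checks the eigenvalues of the Jacobian at each one, confirming the claim above that all three are unstable in the chaotic regime.

import numpy as np

# Assumed standard chaotic-regime parameters (not stated in the abstract).
sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

def lorenz(v):
    """Lorenz vector field f(x, y, z)."""
    x, y, z = v
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def jacobian(v):
    """Jacobian of the Lorenz vector field at v."""
    x, y, z = v
    return np.array([
        [-sigma,  sigma,  0.0],
        [rho - z, -1.0,   -x],
        [y,       x,      -beta],
    ])

# The three fixed points solve f(v) = 0 in closed form:
# the origin plus a symmetric pair C+ / C-.
r = np.sqrt(beta * (rho - 1.0))
fixed_points = [
    np.zeros(3),
    np.array([ r,  r, rho - 1.0]),
    np.array([-r, -r, rho - 1.0]),
]

for fp in fixed_points:
    assert np.allclose(lorenz(fp), 0.0)   # confirm it is a fixed point
    eigvals = np.linalg.eigvals(jacobian(fp))
    unstable = np.any(eigvals.real > 0)   # any eigenvalue with positive real part
    print(fp, "unstable" if unstable else "stable")

This is also consistent with how directional fibers locate fixed points: as we understand the technique, the fiber for a chosen direction c is the curve of points satisfying f(v) = alpha * c, and fixed points are recovered wherever the fiber crosses alpha = 0.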
