Abstract

A central challenge in data-driven model discovery is the presence of hidden, or latent, variables that are not directly measured but are dynamically important. Takens' theorem provides conditions under which it is possible to augment partial measurements with time-delayed information, resulting in an attractor that is diffeomorphic to that of the original full-state system. This diffeomorphism is typically unknown, and learning the dynamics in the embedding space has remained an open challenge for decades. Here, we design a deep autoencoder network to learn a coordinate transformation from the delay-embedded space into a new space, where it is possible to represent the dynamics in a sparse, closed form. We demonstrate this approach on the Lorenz, Rössler and Lotka–Volterra systems, as well as a Lorenz analogue from a video of a chaotic waterwheel experiment. This framework combines deep learning and the sparse identification of nonlinear dynamics (SINDy) method to uncover interpretable models within effective coordinates.
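To make the two ingredients the abstract combines more concrete, the following is a minimal sketch, not the paper's implementation: it builds a Takens-style delay embedding (a Hankel matrix) from a single measured coordinate of the Lorenz system, and runs SINDy-style sparse regression (sequentially thresholded least squares) on the full-state trajectory for reference. The deep autoencoder that the paper trains to link these two steps is omitted; all variable names, the delay count, and the threshold are illustrative choices.

```python
# Sketch (assumptions): delay-embedding a partial Lorenz measurement, plus a
# reference SINDy fit on the full state. Not the authors' code.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt = 0.001
t = np.arange(0, 20, dt)
sol = solve_ivp(lorenz, (t[0], t[-1]), [-8.0, 8.0, 27.0],
                t_eval=t, rtol=1e-9, atol=1e-9)
X = sol.y.T  # full state, shape (len(t), 3)

# 1) Takens-style delay embedding of the single measurement x(t):
#    stack time-shifted copies of x into the columns of a Hankel matrix H.
x = X[:, 0]
n_delays = 7  # illustrative; the embedding dimension is a modeling choice
H = np.column_stack(
    [x[i : len(x) - n_delays + i + 1] for i in range(n_delays)]
)

# 2) SINDy on the full state. In the paper, the autoencoder learns coordinates
#    from H in which this same sparse regression yields a closed-form model.
def library(X):
    # Polynomial library up to degree 2: [1, x, y, z, x^2, xy, xz, y^2, yz, z^2]
    x, y, z = X.T
    return np.column_stack([np.ones_like(x), x, y, z,
                            x * x, x * y, x * z, y * y, y * z, z * z])

dX = np.gradient(X, dt, axis=0)       # finite-difference derivative estimates
Theta = library(X)
Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
for _ in range(10):                   # sequentially thresholded least squares
    small = np.abs(Xi) < 0.1          # threshold is an illustrative value
    Xi[small] = 0.0
    for k in range(dX.shape[1]):      # refit the surviving terms per equation
        big = ~small[:, k]
        Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]

print(np.round(Xi, 2))  # nonzero entries approximate the Lorenz coefficients
```

Run as-is, the printed coefficient matrix recovers the Lorenz equations from the full state; the paper's contribution is learning, via the autoencoder, a coordinate change under which the same sparse regression succeeds on the delay matrix H alone.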
