Abstract

A central challenge in data-driven model discovery is the presence of hidden, or latent, variables that are not directly measured but are dynamically important. Takens' theorem provides conditions under which it is possible to augment partial measurements with time-delayed information, resulting in an attractor that is diffeomorphic to that of the original full-state system. This diffeomorphism is typically unknown, and learning the dynamics in the embedding space has remained an open challenge for decades. Here, we design a deep autoencoder network to learn a coordinate transformation from the delay-embedded space into a new space, where it is possible to represent the dynamics in a sparse, closed form. We demonstrate this approach on the Lorenz, Rössler and Lotka–Volterra systems, as well as a Lorenz analogue from a video of a chaotic waterwheel experiment. This framework combines deep learning and the sparse identification of nonlinear dynamics (SINDy) method to uncover interpretable models within effective coordinates.
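The delay embedding that Takens' theorem justifies can be sketched in a few lines: stack time-shifted copies of a single measured coordinate into columns of a Hankel-style matrix, so each row is a point in the embedding space. The snippet below is a minimal illustration of this construction (the function name `delay_embed` and the sine-wave example are ours, not from the paper); it is not the paper's full autoencoder pipeline.

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Embed a scalar time series x into `dim` delay coordinates.

    Each row of the result is (x[t], x[t + tau], ..., x[t + (dim-1)*tau]),
    i.e. a point in the delay-embedded space.
    """
    n = len(x) - (dim - 1) * tau  # number of complete delay vectors
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Example: embed a sampled sine wave in 3 delay coordinates.
t = np.linspace(0, 20, 2000)
x = np.sin(t)
H = delay_embed(x, dim=3, tau=5)
print(H.shape)  # -> (1990, 3)
```

In the paper's setting, rows of such a matrix (built from one measured state of, e.g., the Lorenz system) are the inputs that the autoencoder maps into coordinates where a sparse model can be identified.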

