Abstract
Machine learning models have emerged as powerful tools in physics and engineering. In this work, we use an autoencoder with latent space penalization to discover approximate finite-dimensional manifolds of canonical partial differential equations. We test this method on the Kuramoto-Sivashinsky (K-S), Korteweg-de Vries (KdV), and damped KdV equations. We show that the resulting optimal latent space of the K-S equation is consistent with the dimension of its inertial manifold. We then uncover a nonlinear basis representing the manifold in the latent space of the K-S equation. The results for the KdV equation show that recovering a reduced latent space is more difficult, consistent with its truly infinite-dimensional dynamics. In the case of the damped KdV equation, we find that the number of active latent dimensions decreases as the damping coefficient increases.
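The core idea, an autoencoder whose latent activations are penalized so that unneeded dimensions shrink toward zero, can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic data, network sizes, L1 penalty weight, and learning rate below are all illustrative assumptions, standing in for PDE solution snapshots and the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshots lying near a 1-D (circular) manifold embedded in 16
# ambient dimensions -- a stand-in for PDE solution snapshots; all sizes
# and hyperparameters here are illustrative assumptions.
t = rng.uniform(0.0, 2.0 * np.pi, size=(512, 1))
X = np.hstack([np.sin(t), np.cos(t)]) @ rng.standard_normal((2, 16))

d_in, d_lat, lam, lr = X.shape[1], 8, 1e-3, 0.5
W_enc = 0.1 * rng.standard_normal((d_in, d_lat))
W_dec = 0.1 * rng.standard_normal((d_lat, d_in))

def forward(X):
    Z = np.tanh(X @ W_enc)   # latent code
    return Z, Z @ W_dec      # reconstruction

def total_loss(X):
    Z, X_hat = forward(X)
    # Reconstruction error plus an L1 penalty that pushes unneeded
    # latent dimensions toward zero ("latent space penalization").
    return np.mean((X - X_hat) ** 2) + lam * np.mean(np.abs(Z))

loss_init = total_loss(X)
for _ in range(2000):
    Z, X_hat = forward(X)
    err = (X_hat - X) * (2.0 / X.size)        # d(MSE)/d(X_hat)
    g_dec = Z.T @ err
    g_z = err @ W_dec.T + lam * np.sign(Z) / Z.size
    g_enc = X.T @ (g_z * (1.0 - Z ** 2))      # tanh' = 1 - tanh^2
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc
loss_final = total_loss(X)

# Mean absolute activation per latent dimension: dimensions with near-zero
# activity are the ones the penalty has effectively switched off, so the
# count of active dimensions estimates the manifold dimension.
activity = np.mean(np.abs(forward(X)[0]), axis=0)
```

In this spirit, sweeping the penalty weight (or a damping parameter in the underlying PDE) and counting the latent dimensions whose activity stays above a threshold gives the "number of active dimensions" referred to in the abstract.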