Abstract

In analyzing and assessing the condition of dynamical systems, it is necessary to account for nonlinearity. Recent advances in computation have rendered previously infeasible analyses readily executable on common computer hardware. However, in certain use cases, such as uncertainty quantification or high-precision real-time simulation, the computational cost remains a challenge. This motivates the adoption of reduced-order modelling methods, which can reduce the computational toll of such nonlinear analyses. In this work, we propose a reduction scheme that exploits an autoencoder as a means to infer a latent space from output-only response data. This latent space, which in essence approximates the system's nonlinear normal modes (NNMs), serves as an invertible reduction basis for the nonlinear system. The proposed machine learning framework is then complemented by long short-term memory (LSTM) networks operating in the reduced space. These are used to create a nonlinear reduced-order model (ROM) of the system, able to recreate the full system's dynamic response under a known driving input.
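The abstract describes a two-stage architecture: an autoencoder that compresses full-field response data into a low-dimensional latent space, and an LSTM that propagates those latent coordinates under a known driving input before decoding back to the full response. The following is a minimal sketch of that idea, not the authors' implementation; the layer sizes, names, and training setup are illustrative assumptions.

```python
# Hypothetical sketch of an autoencoder-based ROM with an LSTM in the latent space.
# All dimensions and architectures below are assumed for illustration only.
import torch
import torch.nn as nn


class Autoencoder(nn.Module):
    def __init__(self, n_dof: int, n_latent: int):
        super().__init__()
        # Encoder: full response (n_dof) -> latent coordinates (n_latent)
        self.encoder = nn.Sequential(
            nn.Linear(n_dof, 64), nn.Tanh(),
            nn.Linear(64, n_latent),
        )
        # Decoder: latent coordinates -> reconstructed full response
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, 64), nn.Tanh(),
            nn.Linear(64, n_dof),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


class LatentLSTM(nn.Module):
    """Predicts the next latent state from past latent states and the driving input."""

    def __init__(self, n_latent: int, n_input: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_latent + n_input, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_latent)

    def forward(self, z_seq, u_seq):
        # z_seq: (batch, time, n_latent), u_seq: (batch, time, n_input)
        out, _ = self.lstm(torch.cat([z_seq, u_seq], dim=-1))
        return self.head(out)


# Illustrative usage on synthetic data: encode measured responses, predict the
# latent trajectory one step ahead from the driving input, then decode back.
n_dof, n_latent, n_input, T = 20, 2, 1, 500
x = torch.randn(8, T, n_dof)        # synthetic response histories
u = torch.randn(8, T, n_input)      # known driving input

ae = Autoencoder(n_dof, n_latent)
rom = LatentLSTM(n_latent, n_input)

x_hat, z = ae(x)                    # reconstruction and latent (NNM-like) coordinates
z_next = rom(z[:, :-1], u[:, :-1])  # one-step-ahead latent prediction
x_pred = ae.decoder(z_next)         # lift back to the full response space
```

In such a scheme the decoder plays the role of the invertible reduction basis mentioned in the abstract: once the LSTM has advanced the latent coordinates, the full-order response is recovered by decoding, so only the low-dimensional dynamics need to be simulated online.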
