Abstract

Although projection-based reduced-order models (ROMs) for parameterized nonlinear dynamical systems have demonstrated exciting results across a range of applications, their broad adoption has been limited by their intrusivity: implementing such a reduced-order model typically requires significant modifications to the underlying simulation code. To address this, we propose a method that enables traditionally intrusive reduced-order models to be accurately approximated in a non-intrusive manner. Specifically, the approach approximates the low-dimensional operators associated with projection-based ROMs using modern machine-learning regression techniques. The only requirement of the simulation code is the ability to export the velocity given the state and parameters; this functionality is used to train the approximated low-dimensional operators. In addition to enabling non-intrusivity, we demonstrate that the approach also leads to very low computational complexity, achieving run-time speedups of up to $10^3\times$. We demonstrate the effectiveness of the proposed technique on two types of PDEs, parabolic and hyperbolic, regardless of the dimension of the full-order model (FOM).
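The training step described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: it assumes a user-supplied fom_velocity(x, mu) routine (the only export the abstract requires of the simulation code) and a precomputed POD basis V, projects sampled states and velocities into the reduced space, and fits an off-the-shelf regressor to the reduced velocity.

```python
# Minimal sketch of the non-intrusive training step. `fom_velocity` and `V`
# (POD basis of shape (N, n), n << N) are assumed inputs, not part of the paper's code.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor

def build_training_data(fom_velocity, V, snapshots, params):
    """Project sampled states and their FOM velocities onto the POD basis."""
    X, Y = [], []
    for x, mu in zip(snapshots, params):
        x_hat = V.T @ x                        # reduced state (size n)
        f_hat = V.T @ fom_velocity(x, mu)      # reduced velocity (size n)
        X.append(np.concatenate([x_hat, np.atleast_1d(mu)]))
        Y.append(f_hat)
    return np.array(X), np.array(Y)

def fit_reduced_velocity(X, Y):
    """Regress the reduced velocity on (reduced state, parameters); RBF-SVR as one example."""
    return MultiOutputRegressor(SVR(kernel="rbf")).fit(X, Y)
```

The fitted regressor then plays the role of the low-dimensional operator: at prediction time the ROM is integrated using only the surrogate, without further calls to the full-order simulation code.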

Highlights

  • Modern computational architectures have enabled the detailed numerical simulation of incredibly complex physical and engineering systems at a vast range of scales in both space and time [31]

  • Assuming the considered regression model generates bounded outputs in the reduced space, we examine the boundedness of the surrogate with respect to the full-order model (FOM) over the time evolution of the states along the trajectory

  • The results show that support vector regression (SVR)-based models, e.g. SVR2 and SVR3, yield the smallest relative errors, although their computational cost is more expensive than that of the FOM


Summary

Introduction

Modern computational architectures have enabled the detailed numerical simulation of incredibly complex physical and engineering systems at a vast range of scales in both space and time [31]. We investigate and compare several emerging techniques from machine learning, i.e., applied data-driven optimization, for non-intrusive reduced-order modeling. Intrusive model-reduction methods, based on a working and decomposable numerical simulation of the governing equations, provide the most general and widely used set of techniques. Foremost in this arsenal is the Galerkin projection of the governing equations onto a low-dimensional linear subspace, usually spanned by orthogonal modes, such as Fourier modes or data-driven modes from proper orthogonal decomposition (POD) [3,6,23,39]. After applying time integration to the regression-based ROM, we compute the relative error of the proposed models as a function of time. For time integration, we investigate both Newton–Raphson and fixed-point iteration for the implicit backward Euler scheme, as well as the explicit 4th-order Runge–Kutta method.
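As a concrete illustration of the ingredients named above, the following sketch builds a POD basis from a snapshot matrix and advances a regression-based ROM in time with an explicit 4th-order Runge–Kutta step and a fixed-point backward Euler step. The routine name rom_velocity is a placeholder for the fitted surrogate of the reduced velocity, not part of the paper's code.

```python
# Illustrative sketch only: POD basis via truncated SVD and two of the time
# integrators mentioned in the introduction, applied to d/dt x_hat = rom_velocity(x_hat, mu).
import numpy as np

def pod_basis(snapshots, n):
    """Leading n left singular vectors of the snapshot matrix (columns are states)."""
    U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n]

def rk4_step(rom_velocity, x_hat, mu, dt):
    """One explicit 4th-order Runge-Kutta step."""
    k1 = rom_velocity(x_hat, mu)
    k2 = rom_velocity(x_hat + 0.5 * dt * k1, mu)
    k3 = rom_velocity(x_hat + 0.5 * dt * k2, mu)
    k4 = rom_velocity(x_hat + dt * k3, mu)
    return x_hat + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def backward_euler_fixed_point(rom_velocity, x_hat, mu, dt, iters=50, tol=1e-10):
    """One implicit backward-Euler step solved by fixed-point iteration."""
    y = x_hat
    for _ in range(iters):
        y_next = x_hat + dt * rom_velocity(y, mu)
        if np.linalg.norm(y_next - y) < tol:
            return y_next
        y = y_next
    return y
```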

Method
Conclusions
Findings
[Figure: comparison of the regression models SVR2, SVR3, SVRrbf, RF, Boosting, kNN, VKOGA, and SINDy; logarithmic axis with ticks down to 10^-6]
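The legend above lists the regression models compared in the study. A hedged sketch of such a comparison, using scikit-learn stand-ins for the SVR variants, random forests, boosting, and kNN (the VKOGA and SINDy variants come from separate packages and are omitted here), could look as follows; the training and test arrays are assumed to hold reduced states/parameters and reduced velocities as in the earlier sketch.

```python
# Hypothetical comparison harness, not the authors' benchmark code.
import numpy as np
from sklearn.svm import SVR
from sklearn.multioutput import MultiOutputRegressor
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.neighbors import KNeighborsRegressor

models = {
    "SVR2": MultiOutputRegressor(SVR(kernel="poly", degree=2)),
    "SVR3": MultiOutputRegressor(SVR(kernel="poly", degree=3)),
    "SVRrbf": MultiOutputRegressor(SVR(kernel="rbf")),
    "RF": RandomForestRegressor(n_estimators=100),
    "Boosting": MultiOutputRegressor(GradientBoostingRegressor()),
    "kNN": KNeighborsRegressor(n_neighbors=5),
}

def relative_errors(models, X_train, Y_train, X_test, Y_test):
    """Fit each regressor and report its relative test error on the reduced velocity."""
    errors = {}
    for name, model in models.items():
        Y_pred = model.fit(X_train, Y_train).predict(X_test)
        errors[name] = np.linalg.norm(Y_pred - Y_test) / np.linalg.norm(Y_test)
    return errors
```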