Abstract

This article addresses the inference of physics models from data, from the perspectives of inverse problems and model reduction. These fields develop formulations that integrate data into physics-based models while exploiting the fact that many mathematical models of natural and engineered systems exhibit an intrinsically low-dimensional solution manifold. In inverse problems, we seek to infer uncertain components of the inputs from observations of the outputs, while in model reduction we seek low-dimensional models that explicitly capture the salient features of the input–output map through approximation in a low-dimensional subspace. In both cases, the result is a predictive model that reflects data-driven learning yet deeply embeds the underlying physics, and thus can be used for design, control and decision-making, often with quantified uncertainties. We highlight recent developments in scalable and efficient algorithms for inverse problems and model reduction governed by large-scale models in the form of partial differential equations. Several illustrative applications to large-scale complex problems across different domains of science and engineering are provided.

Highlights

  • We discuss the mathematical properties of projection as a general framework for deriving surrogate models, and present reduced models obtained by combining proper orthogonal decomposition with Galerkin projection for time-dependent and parametrized systems

  • We present computation of the low-dimensional subspace using the proper orthogonal decomposition (POD) method

  • We discuss the use of hyper-reduction methods, such as the discrete empirical interpolation method (DEIM), to approximate non-linear terms
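The POD-plus-Galerkin workflow named in the highlights can be illustrated with a minimal sketch: a snapshot matrix is factorized by the SVD, the leading left singular vectors form the reduced basis, and a linear operator is projected onto that basis. This is a generic toy example, not the article's own code; the snapshot data and operator here are synthetic.

```python
import numpy as np

def pod_basis(U, r):
    """Return an r-dimensional POD basis for snapshot matrix U (n x k).

    The columns of U are state snapshots; the POD basis consists of the
    leading r left singular vectors, which minimize the snapshot
    reconstruction error in the Frobenius norm.
    """
    V, s, _ = np.linalg.svd(U, full_matrices=False)
    return V[:, :r], s

# Toy example: snapshots drawn from a 3-dimensional subspace of R^100,
# so the singular values decay to (numerically) zero after r = 3 terms.
rng = np.random.default_rng(0)
modes = rng.standard_normal((100, 3))
snapshots = modes @ rng.standard_normal((3, 50))
Vr, s = pod_basis(snapshots, 3)

# Galerkin projection of a (synthetic) linear operator onto the POD subspace:
# the full 100 x 100 operator is replaced by a 3 x 3 reduced operator.
A = rng.standard_normal((100, 100))
Ar = Vr.T @ A @ Vr
print(Ar.shape)              # (3, 3)
print(s[3] / s[0] < 1e-10)   # truncated singular values are negligible
```

In practice the snapshot matrix comes from solutions of the full-order model at sampled times or parameters, and the truncation rank r is chosen from the singular value decay rather than fixed in advance.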

Summary

PART TWO

Many PDE models of natural and engineered systems exhibit an intrinsically low-dimensional solution manifold. To address the challenges of learning models from data in the large-scale setting of PDEs, we must exploit the low-dimensional structure of the map from inputs to outputs. Here in Part 2 we pose the learning-from-data problem as an inverse problem governed by the forward (PDE) problem, and exploit the low-dimensionality of the parameter-to-observable map to recover the informed components of the model efficiently and scalably, at a cost (measured in forward model solutions) that is independent of the parameter and data dimensions. The statistical inverse problem is addressed in the Bayesian framework in Section 4; low-rank approximation of the Hessian of the log-likelihood exploits the low-dimensionality, resulting in dimension-independent cost. We begin Part 2 with a discussion of ill-posedness and several model elliptic, parabolic and hyperbolic inverse problems intended to illustrate the underlying concepts.
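The low-rank Hessian idea behind this dimension independence can be sketched on a small dense example. Below, the data-misfit Hessian has rank r much smaller than the parameter dimension n, so the Laplace-approximation posterior covariance is a rank-r update of the prior covariance (taken as the identity here for simplicity). This is a hypothetical illustration: in the large-scale setting the Hessian is never formed, only applied to vectors, and the dominant eigenpairs are found matrix-free.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 200, 5          # parameter dimension, effective data rank

# Data-misfit Hessian with an r-dimensional range: only r directions in
# parameter space are informed by the data. J stands in for the Jacobian
# of the parameter-to-observable map (Gauss-Newton approximation).
J = rng.standard_normal((r, n))
H_misfit = J.T @ J

# Eigendecomposition of the misfit Hessian (prior precision = identity,
# so no prior preconditioning is needed in this toy example). Eigenvalues
# decay to zero after r terms, so a rank-r truncation is exact here.
lam, W = np.linalg.eigh(H_misfit)
lam, W = lam[::-1], W[:, ::-1]           # sort in descending order
Wr, lam_r = W[:, :r], lam[:r]

# Sherman-Morrison-Woodbury: posterior covariance = (H_misfit + I)^{-1}
# = I - Wr diag(lam_r / (lam_r + 1)) Wr^T, a rank-r update of the prior.
post_cov_lr = np.eye(n) - Wr @ np.diag(lam_r / (lam_r + 1.0)) @ Wr.T
post_cov_exact = np.linalg.inv(H_misfit + np.eye(n))
print(np.allclose(post_cov_lr, post_cov_exact))  # True
```

The cost of this construction is governed by the rank r (the number of data-informed directions), not by the parameter dimension n, which is the sense in which the algorithms scale independently of the discretization.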

Ill-posedness of inverse problems
Inference of the source term in a Poisson equation
Inference of initial condition in a heat equation
Inference of initial condition in a wave equation
Inference of coefficient of a Poisson equation
Summary
Regularization framework
Inexact Newton–conjugate gradient methods
Bayesian framework and Laplace approximation
Bayesian formulation
Finding the MAP point
The Laplace approximation
Optimal experimental design
Computing the Hessian action
An advection–diffusion–reaction inverse problem
The gradient
The Hessian action
Case study: an inverse problem for the Antarctic ice sheet
Forward and inverse ice flow
The MAP point
Computing the Laplace approximation of the posterior
PART THREE Model reduction
Projection-based model reduction
Low-dimensional approximation via projection
General projection framework for semi-discrete systems
Proper orthogonal decomposition
General projection framework for parametrized systems
Non-intrusive model reduction
Non-intrusive versus black-box methods
Non-intrusive model reduction via Operator Inference
Model reduction and its relationship to machine learning
Non-linear model reduction
Challenges of model reduction for non-linear systems
Discrete empirical interpolation method
Exploiting variable transformations in non-linear model reduction
Findings