Abstract

There is growing interest in probabilistic numerical solutions to ordinary differential equations. In this paper, the maximum a posteriori estimate is studied under the class of ν-times differentiable linear time-invariant Gauss–Markov priors, which can be computed with an iterated extended Kalman smoother. The maximum a posteriori estimate corresponds to an optimal interpolant in the reproducing kernel Hilbert space associated with the prior, which in the present case is equivalent to a Sobolev space of smoothness ν + 1. Subject to mild conditions on the vector field, convergence rates of the maximum a posteriori estimate are then obtained via methods from nonlinear analysis and scattered data approximation. These results closely resemble classical convergence results in the sense that a ν-times differentiable prior process obtains a global order of ν, which is demonstrated in numerical examples.

Highlights

  • Let 𝕋 = [0, T], T < ∞, f : 𝕋 × R^d → R^d, y_0 ∈ R^d, and consider the following ordinary differential equation (ODE): Dy(t) = f(t, y(t)), y(0) = y_0, (1), where D denotes the time-derivative operator

  • In applications where a numerical solution is sought as a component of a larger statistical inference problem, it is desirable that the error can be quantified with the same semantics, that is to say, probabilistically (Hennig et al. 2015; Oates and Sullivan 2019)

  • Probabilistic ODE solvers can roughly be divided into two classes: sampling-based solvers and deterministic solvers
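As a concrete instance of the initial value problem (1), here is a minimal sketch in plain Python using the logistic equation, Dy = y(1 − y), which is one of the paper's numerical examples. The growth rate, horizon, and step count are illustrative assumptions, and the baseline explicit Euler discretisation is only a point of comparison, not the paper's method:

```python
# Hypothetical illustration of the initial value problem (1):
# Dy(t) = f(t, y(t)), y(0) = y0, with the scalar logistic vector field.

def f(t, y, r=1.0):
    """Logistic vector field f(t, y) = r * y * (1 - y); r is an assumed rate."""
    return r * y * (1.0 - y)

def euler_solve(f, y0, T=10.0, n=1000):
    """Baseline explicit Euler discretisation of Dy = f(t, y) on [0, T]."""
    h = T / n
    t, y = 0.0, y0
    ys = [y]
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
        ys.append(y)
    return ys

ys = euler_solve(f, y0=0.1)
# The logistic solution approaches the stable equilibrium y = 1.
```

Any convergent solver, probabilistic or classical, should drive the numerical solution of this problem towards the equilibrium at 1 on a long enough horizon.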


Summary

Introduction

Probabilistic ODE solvers can roughly be divided into two classes: sampling-based solvers and deterministic solvers. The former class includes classical ODE solvers that are stochastically perturbed (Teymur et al. 2016; Conrad et al. 2017; Teymur et al. 2018; Abdulle et al. 2020; Lie et al. 2019), solvers that approximately sample from a Bayesian inference problem (Tronarp et al. 2019b), and solvers that perform Gaussian process regression on stochastically generated data (Chkrebtii et al. 2016). It is fruitful to select the Gaussian process prior to be Markovian (Kersting and Hennig 2016; Magnani et al. 2017; Schober et al. 2019; Tronarp et al. 2019b), as …
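The Markovian-prior idea can be sketched as a single-pass extended Kalman filter for a scalar ODE. This is not the paper's full iterated extended Kalman smoother (no iteration of the linearisation and no backward smoothing pass); it only illustrates the filtering recursion: a once-integrated Wiener process prior on the state (y, Dy), with updates on the residual Dy − f(t, y). The function names, the diffusion parameter sigma, and all step sizes are assumptions made for this sketch:

```python
# Sketch of a Gauss--Markov probabilistic ODE filter (EK-style, scalar case).
# Prior: once-integrated Wiener process on (y, Dy); data: zero observations
# of the residual Dy(t_n) - f(t_n, y(t_n)), linearised at the predicted mean.

def ek1_filter(f, df, y0, T=10.0, n=200, sigma=1.0):
    """One-pass extended Kalman filter for Dy = f(t, y); df is df/dy."""
    h = T / n
    A = [[1.0, h], [0.0, 1.0]]                       # prior transition matrix
    Q = [[sigma**2 * h**3 / 3, sigma**2 * h**2 / 2],
         [sigma**2 * h**2 / 2, sigma**2 * h]]        # prior process noise
    m, P = [y0, f(0.0, y0)], [[0.0, 0.0], [0.0, 0.0]]
    t, means = 0.0, [y0]
    for _ in range(n):
        # Predict: m <- A m, P <- A P A^T + Q.
        mp = [m[0] + h * m[1], m[1]]
        AP = [[A[i][0] * P[0][j] + A[i][1] * P[1][j] for j in (0, 1)]
              for i in (0, 1)]
        Pp = [[AP[i][0] * A[j][0] + AP[i][1] * A[j][1] + Q[i][j]
               for j in (0, 1)] for i in (0, 1)]
        t += h
        # Update on the linearised zero observation of Dy - f(t, y),
        # with Jacobian H = [-df/dy, 1] and zero measurement noise.
        Hy = -df(t, mp[0])
        z = mp[1] - f(t, mp[0])                      # residual; innovation is -z
        S = Hy * (Hy * Pp[0][0] + Pp[1][0]) + Hy * Pp[0][1] + Pp[1][1]
        K = [(Pp[0][0] * Hy + Pp[0][1]) / S, (Pp[1][0] * Hy + Pp[1][1]) / S]
        m = [mp[0] - K[0] * z, mp[1] - K[1] * z]
        P = [[Pp[i][j] - K[i] * (Hy * Pp[0][j] + Pp[1][j]) for j in (0, 1)]
             for i in (0, 1)]
        means.append(m[0])
    return means

# Logistic test problem: f(t, y) = y(1 - y), df/dy = 1 - 2y.
means = ek1_filter(lambda t, y: y * (1.0 - y),
                   lambda t, y: 1.0 - 2.0 * y, y0=0.1)
```

On this problem the filter mean tracks the logistic solution towards its equilibrium at 1; iterating the linearisation to a fixed point and adding a backward smoothing pass is what yields the MAP estimate studied in the paper.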

Notation
A probabilistic state-space model
The prior
The selection of prior
The data model
Maximum a posteriori estimation
Inference with affine vector fields
The iterated extended Kalman smoother
Initialisation
Computational complexity
The reproducing kernel Hilbert space of the prior
Nonlinear kernel interpolation
Convergence analysis
Model correctness and regularity of the solution
Properties of the information operator
Convergence of the MAP estimate
Selecting the hyperparameters
Numerical examples
The logistic equation
A Riccati equation
The FitzHugh–Nagumo model
Appendix A: Computing transition densities
Appendix B: Calibration