Abstract

Importance sampling algorithms are discussed in detail, with an emphasis on implicit sampling, and applied to data assimilation via particle filters. Implicit sampling makes it possible to use the data to find high-probability samples at relatively low cost, making the assimilation more efficient. A new analysis of the feasibility of data assimilation is presented, showing in detail why feasibility depends on the Frobenius norm of the covariance matrix of the noise and not on the number of variables. A discussion of the convergence of particular particle filters follows. A major open problem in numerical data assimilation is the determination of appropriate priors; a progress report on recent work on this problem is given. The analysis highlights the need for careful attention both to the data and to the physics in data assimilation problems.
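
To make the feasibility claim concrete, here is a small numerical sketch of our own (a hypothetical illustration, not code from the paper): importance sampling with the prior as the proposal in a toy linear Gaussian problem with data b = x + noise. When the prior-noise spectrum is flat, the Frobenius norm of its covariance grows with the dimension and the effective sample size should collapse; when the spectrum decays rapidly, the Frobenius norm stays bounded and the effective sample size should remain healthy even when the number of variables is large.

```python
import numpy as np

def effective_sample_size(logw):
    """ESS = 1 / sum of squared normalized weights."""
    w = np.exp(logw - logw.max())
    w /= w.sum()
    return 1.0 / np.sum(w ** 2)

def prior_proposal_ess(q_eigs, r=1.0, n=10_000, seed=0):
    """Importance sampling with the prior N(0, diag(q_eigs)) as proposal
    for data b = x_true + N(0, r*I); the weights are the likelihood values."""
    rng = np.random.default_rng(seed)
    d = q_eigs.size
    x_true = np.sqrt(q_eigs) * rng.standard_normal(d)
    b = x_true + np.sqrt(r) * rng.standard_normal(d)
    x = np.sqrt(q_eigs) * rng.standard_normal((n, d))      # samples from the prior
    logw = -0.5 * np.sum((b - x) ** 2, axis=1) / r          # log-likelihood weights
    return effective_sample_size(logw)

for d in (10, 100, 1000):
    flat  = np.ones(d)                       # ||Q||_F grows like sqrt(d)
    decay = 1.0 / np.arange(1, d + 1) ** 2   # ||Q||_F stays bounded as d grows
    print(f"d={d:5d}  flat-spectrum ESS={prior_proposal_ess(flat):9.1f}"
          f"  decaying-spectrum ESS={prior_proposal_ess(decay):9.1f}")
```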

Highlights

  • Bayesian methods for estimating parameters and states in complex systems are widely used in science and engineering; they combine a prior distribution of the quantities of interest, often generated by computation, with data from observations, to produce a posterior distribution from which reliable inferences can be made

  • We present implicit sampling methods for calculating a posterior distribution of the unknowns of interest, given a prior distribution and a distribution of the observation errors, first in a parameter estimation problem, then in a data assimilation problem where the prior is generated by solving stochastic differential equations with a given noise (see the sketch after this list)

  • A linear convergence analysis for data assimilation methods shows that Monte Carlo methods converge for many physically meaningful data assimilation problems, provided that the numerical analysis is appropriate and that the size of the noise is small enough in the appropriate norm, even when the number of variables is very large
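
As a concrete illustration of the implicit sampling idea referenced in the second highlight, the following one-dimensional sketch (our own toy parameter-estimation example, not taken from the paper) minimizes the negative log-posterior F to find the mode, maps reference Gaussian samples ξ to samples x by solving F(x) = min F + ξ²/2 on the side of the mode given by the sign of ξ, and weights each sample by the Jacobian |dx/dξ| of that map.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

# Hypothetical toy posterior: Gaussian prior on x, Poisson-type likelihood with
# count b and log-intensity x, so F(x) = x^2/2 + exp(x) - b*x is smooth and convex.
b = 4.0
F   = lambda x: 0.5 * x ** 2 + np.exp(x) - b * x
dF  = lambda x: x + np.exp(x) - b
ddF = lambda x: 1.0 + np.exp(x)

mu  = minimize_scalar(F).x      # posterior mode
phi = F(mu)                     # minimum of F

rng = np.random.default_rng(1)
n = 2000
xi = rng.standard_normal(n)     # reference Gaussian samples
xs, logw = np.empty(n), np.empty(n)
for i, z in enumerate(xi):
    if abs(z) < 1e-12:                          # xi ~ 0 maps to the mode itself
        xs[i], logw[i] = mu, -0.5 * np.log(ddF(mu))
        continue
    level = phi + 0.5 * z ** 2
    lo, hi = (mu, mu + 1.0) if z > 0 else (mu - 1.0, mu)
    while F(hi if z > 0 else lo) < level:       # widen the bracket until F crosses the level
        if z > 0: hi += 1.0
        else:     lo -= 1.0
    x = brentq(lambda t: F(t) - level, lo, hi)  # solve F(x) = phi + xi^2/2
    xs[i] = x
    # Jacobian |dx/dxi| = |xi| / |F'(x)|, by implicit differentiation of F(x) = phi + xi^2/2
    logw[i] = np.log(abs(z)) - np.log(abs(dF(x)))

w = np.exp(logw - logw.max()); w /= w.sum()
print("posterior mean estimate:", np.sum(w * xs))
print("effective sample size  :", 1.0 / np.sum(w ** 2))
```

Because the map pushes every sample into the high-probability region around the mode, the weights typically stay nearly uniform and the effective sample size stays close to n.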

Summary

Introduction

Bayesian methods for estimating parameters and states in complex systems are widely used in science and engineering; they combine a prior distribution of the quantities of interest, often generated by computation, with data from observations, to produce a posterior distribution from which reliable inferences can be made.

We wish to sample the conditional pdf recursively, time step after time step, which is natural in problems where the data are obtained sequentially, and drastically reduces the computational cost. To do this we use an importance function π of the form

π(x_{0:n+1} | b_{1:n+1}) = π_0(x_0) ∏_{k=1}^{n+1} π_k(x_k | x_{0:k−1}, b_{1:k}).

The number 1 on the right-hand side stands in for a sharper estimate of the acceptable variance of the weights for a given problem (which depends on the computing resources). In both cases, the added condition is quadratic and homogeneous in the ratio q/r, and slices out conical regions from the region where data assimilation is feasible in principle.

The analysis of the EnKF relates to the situation where it is used to sample …
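
A minimal code sketch of this recursive strategy, under simplifying assumptions of our own (a toy linear Gaussian model, with the model transition density used as π_k, i.e. the "bootstrap" choice rather than the data-informed π_k of implicit sampling):

```python
import numpy as np

# Toy state-space model (illustrative only):
#   x_{k+1} = a*x_k + q*w_k,   b_{k+1} = x_{k+1} + r*v_k,   w_k, v_k ~ N(0, 1)
rng = np.random.default_rng(0)
a, q, r = 0.9, 0.5, 0.3
n_steps, n_particles = 50, 1000

# synthetic truth and observations
x_true = np.zeros(n_steps + 1)
b = np.zeros(n_steps + 1)
for k in range(n_steps):
    x_true[k + 1] = a * x_true[k] + q * rng.standard_normal()
    b[k + 1] = x_true[k + 1] + r * rng.standard_normal()

# particles drawn from pi_0, then propagated one step at a time; with pi_k equal to
# the transition density, each weight update is the likelihood factor p(b_{k+1} | x_{k+1})
x = rng.standard_normal(n_particles)                 # samples from pi_0 = N(0, 1)
w = np.full(n_particles, 1.0 / n_particles)
for k in range(n_steps):
    x = a * x + q * rng.standard_normal(n_particles)      # sample from pi_{k+1}
    w *= np.exp(-0.5 * ((b[k + 1] - x) / r) ** 2)         # multiply in the new weight factor
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < n_particles / 2:            # resample when the ESS drops
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x, w = x[idx], np.full(n_particles, 1.0 / n_particles)

print("final state estimate:", np.sum(w * x), " truth:", x_true[-1])
```

The variance of the weights is exactly what the feasibility criterion above monitors: if it grows too large (here, if the effective sample size keeps collapsing), the filter degenerates and the assimilation becomes infeasible with a practical number of particles.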

[Figure: region where data assimilation is infeasible]