Abstract

The sample covariance matrix of a random vector is a good estimate of the true covariance matrix if the sample size is much larger than the length of the vector. In high-dimensional problems, this condition is never met. As a result, in high dimensions the Ensemble Kalman Filter’s (EnKF) ensemble does not contain enough information to specify the prior covariance matrix accurately. This necessitates regularization of the analysis (observation update) problem. We propose a regularization technique based on a new spatial model. The model is a constrained version of the general Gaussian process convolution model. The constraints include local stationarity and smoothness of local spectra. We regularize the EnKF by postulating that its prior covariances obey the spatial model. Placing a hyperprior distribution on the model parameters and using the likelihood of the prior ensemble data allows for an optimized use of both the ensemble and the hyperprior. A linear version of the respective estimator is shown to be consistent. A more accurate nonlinear neural-Bayes implementation of the estimator is developed. In simulation experiments, the new technique led to substantially better EnKF performance than several existing techniques.
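To make the two ingredients of the abstract concrete, the following is a minimal, generic sketch rather than the paper's constrained model: a Gaussian process convolution prior on a one-dimensional grid, and the rank deficiency of the sample covariance computed from a small ensemble drawn from that prior. The grid size, Gaussian kernel shape, length scale, and ensemble size are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Generic Gaussian process convolution on a 1-D grid:
# the field is white noise passed through a local kernel,
#   x(s_i) = sum_j K[i, j] z_j,  z ~ N(0, I),
# so the implied model covariance is Cov(x) = K K^T.

rng = np.random.default_rng(0)

n_grid = 100                          # number of grid points (illustrative)
s = np.linspace(0.0, 1.0, n_grid)     # field locations
u = s.copy()                          # kernel centres (here: same grid)

length_scale = 0.05                   # illustrative kernel width
K = np.exp(-0.5 * ((s[:, None] - u[None, :]) / length_scale) ** 2)

# Full covariance implied by the convolution model.
C_model = K @ K.T

# A small "ensemble" drawn from the model: columns are members.
n_ens = 10                            # ensemble size << state dimension
Z = rng.standard_normal((n_grid, n_ens))
X = K @ Z                             # each column is one convolved field

# The sample covariance from the small ensemble has rank at most
# n_ens - 1 after mean removal, far below the state dimension,
# which is why the analysis problem needs regularization.
Xc = X - X.mean(axis=1, keepdims=True)
C_sample = Xc @ Xc.T / (n_ens - 1)
print(f"state dimension: {n_grid}")
print(f"sample covariance rank: {np.linalg.matrix_rank(C_sample)}")
```

The paper's method goes further by constraining the kernels (local stationarity, smooth local spectra) and estimating them from the ensemble under a hyperprior; the sketch above only shows the unconstrained convolution construction and the rank problem it is meant to address.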
