Abstract

Data assimilation algorithms combine prior and observational information, weighted by their respective uncertainties, to obtain the most likely posterior of a dynamical system. In variational data assimilation the posterior is computed by solving a nonlinear least squares problem. Many numerical weather prediction (NWP) centers use full observation error covariance (OEC) weighting matrices, which can slow convergence of the data assimilation procedure. Previous work revealed the importance of the minimum eigenvalue of the OEC matrix for conditioning and convergence of the unpreconditioned data assimilation problem. In this article we examine the use of correlated OEC matrices in the preconditioned data assimilation problem for the first time. We consider the case where there are more state variables than observations, which is typical for applications with sparse measurements, for example, NWP and remote sensing. We find that, similarly to the unpreconditioned problem, the minimum eigenvalue of the OEC matrix appears in new bounds on the condition number of the Hessian of the preconditioned objective function. Numerical experiments reveal that the condition number of the Hessian is minimized when the background and observation lengthscales are equal. This contrasts with the unpreconditioned case, where decreasing the observation error lengthscale always improves conditioning. Conjugate gradient experiments show that in this framework the condition number of the Hessian is a good proxy for convergence. Eigenvalue clustering explains cases where convergence is faster than expected.

Highlights

  • Data assimilation algorithms combine observations of a dynamical system, yi ∈ Rpi at times ti, with prior information from a model, xb ∈ RN, to find xi ∈ RN, the most likely state of the system at time ti

  • We find that, similarly to the unpreconditioned problem, the minimum eigenvalue of the observation error covariance (OEC) matrix appears in new bounds on the condition number of the Hessian of the preconditioned objective function

  • We investigate the relationship between conditioning, the full spectrum of the Hessian, and convergence of a linear data assimilation test problem, to assess the suitability of the condition number of the Hessian as a proxy for convergence in this setting

INTRODUCTION

Data assimilation algorithms combine observations of a dynamical system, yi ∈ Rpi at times ti, with prior information from a model, xb ∈ RN, to find xi ∈ RN, the most likely state of the system at time ti. Correlated OEC matrices lead to greater information content of observations, particularly on smaller scales.[19,21,22,23] However, the move from uncorrelated (diagonal) to correlated (full) covariance matrices has caused problems with the convergence of the data assimilation procedure in experiments at NWP centers.[17,24,25] Previous studies of the conditioning of the preconditioned Hessian have focused on the case of uncorrelated OEC matrices.[14,26] In this article we extend this theory by considering the conditioning of the preconditioned variational data assimilation problem in the case of correlated OEC matrices. Our experiments reveal that the ratio between background and observation error correlation lengthscales strongly influences the conditioning of the Hessian, with minimum condition numbers occurring when the two lengthscales are equal.
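The preconditioned (control variable transform) Hessian discussed above can be illustrated with a small numerical sketch. This is not the paper's experimental setup: the grid size, observation operator, and Gaussian correlation functions below are illustrative assumptions chosen only to show how the condition number of the CVT Hessian S = I + B^{1/2} H^T R^{-1} H B^{1/2} can be computed for correlated B and R with chosen lengthscales.

```python
import numpy as np

def gaussian_corr(n, lengthscale):
    """Gaussian correlation matrix on a periodic 1-D grid of n points.

    Illustrative choice; the correlation functions used in practice vary.
    """
    idx = np.arange(n)
    d = np.abs(idx[:, None] - idx[None, :])
    d = np.minimum(d, n - d)  # periodic distance
    return np.exp(-0.5 * (d / lengthscale) ** 2)

N, p = 100, 20                          # more state variables than observations
H = np.zeros((p, N))                    # observe every 5th grid point
H[np.arange(p), np.arange(0, N, N // p)] = 1.0

# Background and observation error covariances with correlation lengthscales
# 5.0 and 1.0 (grid units); small jitter keeps the matrices invertible.
B = gaussian_corr(N, 5.0) + 1e-10 * np.eye(N)
R = gaussian_corr(p, 1.0) + 1e-8 * np.eye(p)

# CVT preconditioning: Hessian of the transformed problem,
# S = I + B^{1/2} H^T R^{-1} H B^{1/2}
B_sqrt = np.linalg.cholesky(B)
S = np.eye(N) + B_sqrt.T @ H.T @ np.linalg.solve(R, H @ B_sqrt)

print("condition number of preconditioned Hessian:", np.linalg.cond(S))
```

Because S is the identity plus a rank-p positive semidefinite term, at least N − p of its eigenvalues equal one, so the condition number is governed entirely by the largest eigenvalue. Varying the two lengthscales in a loop reproduces the kind of sensitivity study described in the article.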

The CVT formulation of the data assimilation problem
THEORETICAL BOUNDS ON THE HESSIAN OF THE PRECONDITIONED PROBLEM
General bounds on the condition number
NUMERICAL FRAMEWORK
Changes to the condition number of the Hessian
Convergence of a conjugate gradient algorithm
CONCLUSIONS