Abstract

<p>We introduce a new formulation of the 4DVAR objective function that uses a p-norm with 1 < p < 2 as the penalty term. So far, only the 2-norm, the 1-norm, or a mixture of the two have been considered as regularization terms. This approach is motivated by the nature of the problems encountered in data assimilation, for which such a norm may be better suited to the distribution of the variables. It also aims at striking a compromise between the 2-norm, which tends to oversmooth the solution or produce Gibbs oscillations, and the 1-norm, which tends to "oversparsify" it, in addition to making the problem non-smooth.</p><p>The performance of the proposed technique is assessed for different values of p by twin experiments on a linear advection equation. The experiments are conducted with two different true states in order to evaluate the p-norm regularized 4DVAR algorithm in a sparse case (a rectangular function) and an "almost" sparse case (a rectangular function with smoother slopes). In this setup, the background and measurement noise covariances are known.</p><p>To minimize the 4DVAR objective function with a p-norm regularization term, we use a gradient descent algorithm that requires duality operators in order to work in a non-Euclidean space: indeed, R<sup>n</sup> equipped with the p-norm (1 < p < 2) is a Banach space. Finally, the regularization parameter appearing in the objective function is tuned using Morozov's discrepancy principle.</p>
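The dual-space gradient step described in the abstract can be sketched on a toy p-norm regularized least-squares problem. This is only an illustrative sketch, not the paper's method: the operators `H`, `y`, `xb`, the regularization weight `lam`, and the step size are hypothetical placeholders standing in for the 4DVAR data-fit and background terms, and the normalization of the duality map is one common convention.

```python
import numpy as np

def duality_map(x, p):
    """Duality map J_p from (R^n, ||.||_p) to its dual (R^n, ||.||_q),
    with 1/p + 1/q = 1, normalised so that <J_p(x), x> = ||x||_p^2
    and ||J_p(x)||_q = ||x||_p.  With this convention J_q = J_p^{-1}."""
    norm_p = np.linalg.norm(x, ord=p)
    if norm_p == 0.0:
        return np.zeros_like(x)
    return norm_p ** (2 - p) * np.sign(x) * np.abs(x) ** (p - 1)

def p_norm_descent(H, y, xb, p, lam, alpha=0.01, iters=500):
    """Gradient descent for the toy p-norm regularised problem
        min_x 0.5 * ||H x - y||_2^2 + (lam / p) * ||x - xb||_p^p,
    a stand-in for a 4DVAR-type objective (H, y, xb are hypothetical).
    The descent step is taken in the dual space and mapped back to the
    primal space with the inverse duality map J_q."""
    q = p / (p - 1)                       # dual exponent
    x = xb.astype(float).copy()
    for _ in range(iters):
        grad = H.T @ (H @ x - y) \
               + lam * np.sign(x - xb) * np.abs(x - xb) ** (p - 1)
        xi = duality_map(x, p) - alpha * grad   # step in the dual space
        x = duality_map(xi, q)                  # map back to the primal space
    return x
```

In the paper the regularization parameter is tuned by Morozov's discrepancy principle, i.e. chosen so that the data misfit matches the known noise level; in this sketch `lam` is simply fixed.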
