A family of inertial‐based derivative‐free projection methods with a correction step for constrained nonlinear equations and their applications

Abstract: Numerous attempts have been made to develop efficient methods for solving systems of constrained nonlinear equations, owing to their widespread use in diverse engineering applications. In this article, we present a family of inertial-based derivative-free projection methods with a correction step for solving such systems, in which the selection of the derivative-free search direction is flexible. This family does not require the computation of the Jacobian matrix or an approximation to it at any iteration, and it possesses the following theoretical properties: (i) the inertial-based corrected direction framework automatically satisfies the sufficient descent and trust-region properties without requiring specific search directions, and is independent of any line search; (ii) the global convergence of the proposed family is proven under a monotonicity condition on the underlying mapping that is weaker than the typical monotonicity or pseudo-monotonicity assumptions; (iii) convergence-rate results for the proposed family are established under slightly stronger assumptions. Furthermore, we propose two effective inertial-based derivative-free projection methods, each embedding a specific search direction into the proposed family. Preliminary numerical experiments on test problems demonstrate the effectiveness and superiority of the proposed methods in comparison with existing ones. Additionally, we apply these methods to sparse signal restoration and image restoration in compressive sensing applications.
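As a rough illustration of the mechanism this abstract describes (not the article's specific family), the following sketch combines an inertial extrapolation step with a Solodov–Svaiter-style derivative-free hyperplane projection for a constrained monotone system F(x) = 0, x in C. The inertial weight, line-search parameters, and the steepest-descent-like direction d = -F(w) are all illustrative assumptions.

```python
import numpy as np

def inertial_dfp(F, x0, proj, beta=0.4, sigma=1e-4, rho=0.5, tol=1e-6, max_iter=500):
    """Hypothetical sketch of an inertial derivative-free projection method.

    F    : the mapping, called only through function values (no Jacobian)
    proj : projection onto the constraint set C
    beta : inertial extrapolation weight (illustrative choice)
    """
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(max_iter):
        w = x + beta * (x - x_prev)          # inertial extrapolation step
        Fw = F(w)
        if np.linalg.norm(Fw) < tol:
            return w
        d = -Fw                              # simplest derivative-free direction
        # Backtracking: find t with  -F(w + t d) . d >= sigma * t * ||d||^2
        t = 1.0
        while -F(w + t * d) @ d < sigma * t * (d @ d) and t > 1e-12:
            t *= rho
        z = w + t * d
        Fz = F(z)
        # Hyperplane projection step, then projection back onto C
        step = (Fz @ (w - z)) / (Fz @ Fz) * Fz
        x_prev, x = x, proj(w - step)
    return x
```

On the toy monotone problem F(x) = x over the nonnegative orthant, the iterates contract geometrically toward the solution x* = 0.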

Stage‐parallel preconditioners for implicit Runge–Kutta methods of arbitrarily high order, linear problems

Abstract: Fully implicit Runge–Kutta methods offer the possibility of using high-order accurate time discretizations to match the accuracy of the space discretization, an issue of significant importance for many large-scale problems of current interest, where fine space resolution may involve many millions of spatial degrees of freedom over long time intervals. In this work, we consider strongly A-stable implicit Runge–Kutta methods of arbitrary order of accuracy, based on Radau quadratures. For the arising large algebraic systems we introduce efficient preconditioners that (1) use only real arithmetic, (2) are robust with respect to problem and discretization parameters, and (3) allow for fully stage-parallel solution. The preconditioners are based on the observation that the entries of the lower-triangular part of the coefficient matrix in the Butcher tableau are larger in magnitude than those of the corresponding strictly upper-triangular part. We analyze the spectrum of the corresponding preconditioned systems and illustrate their performance with numerical experiments. Although this observation was made some time ago, its potential for constructing stage-parallel preconditioners has not previously been exploited, and its systematic study constitutes the novelty of this article.
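A minimal sketch of the triangularization idea the abstract points to, under illustrative assumptions (this is not the article's stage-parallel construction): for a linear stage system (I + dt * A ⊗ K), drop the strictly upper-triangular part of the Butcher matrix A to obtain a block lower-triangular preconditioner that is applied by forward substitution over the stages.

```python
import numpy as np

def apply_precond_inv(A, K, dt, r):
    """Solve (I + dt * tril(A) kron K) y = r by block forward substitution.

    Keeping only the lower-triangular part of the Butcher matrix A is a
    reasonable approximation when its entries dominate the strictly
    upper-triangular ones, as the abstract observes for Radau tableaux.
    """
    s, n = A.shape[0], K.shape[0]
    L = np.tril(A)
    r = r.reshape(s, n)
    y = np.zeros_like(r)
    for i in range(s):                       # one stage at a time
        rhs = r[i] - dt * sum(L[i, j] * (K @ y[j]) for j in range(i))
        y[i] = np.linalg.solve(np.eye(n) + dt * L[i, i] * K, rhs)
    return y.ravel()
```

For the two-stage Radau IIA tableau A = [[5/12, -1/12], [3/4, 1/4]], one can verify directly that the routine inverts the block lower-triangular operator I + dt * (tril(A) ⊗ K).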

Open Access
Impact of correlated observation errors on the conditioning of variational data assimilation problems

Summary: An important class of nonlinear weighted least-squares problems arises from the assimilation of observations in atmospheric and ocean models. In variational data assimilation, inverse error covariance matrices define the weighting matrices of the least-squares problem. For observation errors, a diagonal covariance matrix (i.e., uncorrelated errors) is often assumed for simplicity, even when observation errors are suspected to be correlated. While accounting for observation-error correlations should improve the quality of the solution, it also affects the convergence rate of the minimization algorithms used to iterate to the solution. If the minimization process is stopped before reaching full convergence, as is usually the case in operational applications, the solution may be degraded even if the observation-error correlations are correctly accounted for. In this article, we explore the influence of the observation-error correlation matrix on the convergence rate of a preconditioned conjugate gradient (PCG) algorithm applied to a one-dimensional variational data assimilation (1D-Var) problem. We design the idealized 1D-Var system to include two key features used in more complex systems: we use the background error covariance matrix B as a preconditioner (B-PCG), and we use a diffusion operator to model spatial correlations in both the background and observation errors. Analytical and numerical results with the 1D-Var system show a strong sensitivity of the convergence rate of B-PCG to the parameters of the diffusion-based correlation models. Depending on the parameter choices, correlated observation errors can either speed up or slow down the convergence. In practice, a compromise may be required in the specification of the background and observation error correlation parameters, between staying close to the best available estimates on the one hand and ensuring an adequate convergence rate of the minimization algorithm on the other.
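The sensitivity the summary describes can be probed numerically. The sketch below forms the B-preconditioned 1D-Var Hessian S = I + B^{1/2} H^T R^{-1} H B^{1/2} (whose conditioning governs B-PCG convergence) and prints its condition number as the observation-error correlation length grows. The exponential correlation shape, grid size, and identity observation operator are stand-in assumptions, not the article's diffusion operators.

```python
import numpy as np

def correlation_matrix(n, length):
    """Exponential (Markov) correlation as a stand-in for a diffusion-based model."""
    idx = np.arange(n)
    return np.exp(-np.abs(idx[:, None] - idx[None, :]) / length)

n = 40
H = np.eye(n)                        # observe every grid point (idealized)
B = correlation_matrix(n, length=4.0)
Bh = np.linalg.cholesky(B)           # one valid square root of B

for obs_len in (1e-6, 2.0, 8.0):     # ~diagonal R, then growing correlations
    R = correlation_matrix(n, length=obs_len)
    HBh = H @ Bh
    S = np.eye(n) + HBh.T @ np.linalg.solve(R, HBh)  # preconditioned Hessian
    print(f"R correlation length {obs_len:4.1f}: cond(S) = {np.linalg.cond(S):.1f}")
```

Varying `obs_len` against the background length scale reproduces the qualitative message of the article: observation-error correlations can either improve or worsen the conditioning, depending on the parameter choices.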

Rank‐structured approximation of some Cauchy matrices with sublinear complexity

Abstract: In this article, we consider the rank-structured approximation of one important type of Cauchy matrix. This approximation plays a key role in structured matrix methods such as stable and efficient direct solvers and other algorithms for Toeplitz matrices and certain kernel matrices. Previous rank-structured approximations (specifically hierarchically semiseparable, or HSS, approximations) for such a matrix cost at least linear complexity in the matrix size. Here, we show how to construct an HSS approximation with sublinear complexity. The main ideas include extensive computation reuse and an analytical far-field compression strategy. Low-rank compression at each hierarchical level is restricted to just a single off-diagonal block row, and the resulting basis matrix is then reused for the other off-diagonal block rows as well as the off-diagonal block columns. The relationships among the off-diagonal blocks are rigorously analyzed. The far-field compression uses an analytical proxy point method in which we optimize the choice of certain parameters so as to obtain accurate low-rank approximations. Both the basis-reuse ideas and the resulting analytical hierarchical compression scheme can be generalized to some other kernel matrices and are useful for accelerating relevant rank-structured approximations (though not subsequent operations such as matrix-vector multiplications).
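The property that makes HSS and proxy-point compression work for Cauchy matrices is easy to observe directly: an off-diagonal block C[i, j] = 1 / (x_i - y_j) between well-separated point sets has rapidly decaying singular values, so it admits an accurate low-rank basis. The point sets and tolerance below are illustrative, not the article's configuration.

```python
import numpy as np

# Off-diagonal Cauchy block between well-separated "row" and "column" points.
x = np.linspace(0.0, 1.0, 200)        # sources in [0, 1]
y = np.linspace(3.0, 4.0, 200)        # targets in [3, 4], separated by a gap
C = 1.0 / (x[:, None] - y[None, :])

# Numerical rank at a relative tolerance: far below the full size 200,
# which is what far-field (proxy point) compression exploits.
sv = np.linalg.svd(C, compute_uv=False)
numerical_rank = int(np.sum(sv > 1e-10 * sv[0]))
print("numerical rank at tol 1e-10:", numerical_rank)
```

In an HSS construction, one such basis computed for a single block row can then be reused across the other block rows and columns, which is the computation-reuse idea the abstract highlights.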

Open Access