Abstract

We propose an accelerated version of the classical gradient method for unconstrained optimization problems defined on a Sobolev space $H$ with Hilbert structure. Motivated by empirical results available in the literature demonstrating improved convergence of Sobolev gradient methods with suitably chosen weights, we develop a rigorous and constructive approach for identifying the optimal gradient $g_k = g(\lambda_k)$ among the gradients $g(\lambda)$ parameterized by a weight function $\lambda$ belonging to a finite-dimensional space of weights, which defines the inner product $\langle \cdot, \cdot \rangle_\lambda$ in the space $H$. At the $k$th iteration of the method, where an approximation $u_k \in H$ to the minimizer is given, an optimal weight $\lambda_k$ is found as a solution of a nonlinear minimization problem in the space of weights $\mathbb{R}_+^N$. The weight $\lambda_k$ defines the optimal gradient $g_k$, equal to the projection of the Newton step $h_k$ onto a certain finite-dimensional subspace $T_k$, in the sense that $P_k(\sigma g_k - h_k) = 0$, where $P_k$ is the projection operator onto $T_k$ and $\sigma$ is a fixed step size. This property ensures that the gradient method thus constructed attains, in a certain sense, quadratic convergence for the error components in $T_k$, in addition to the linear convergence typical of the classical gradient method. A numerical implementation of the new approach is also proposed. Computational results based on two model problems confirm the theoretically established convergence properties, demonstrating that the proposed approach outperforms the standard steepest-descent method based on Sobolev gradients and compares favorably to the Newton–Krylov method.
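
To make the structure of one iteration concrete, the following minimal Python sketch illustrates the idea under assumptions not stated in the abstract: the problem is assumed to be discretized, the weighted inner product is assumed to be assembled as $S(\lambda) = M + \sum_i \lambda_i K_i$, and the inner minimization over weights is assumed to penalize the projected misfit $P_k(\sigma g(\lambda) - h_k)$. All function and variable names (sobolev_gradient, optimal_weight_step, M, K_blocks, Tk) are hypothetical and not taken from the paper.

```python
# Schematic sketch (not the paper's implementation) of one iteration of a
# weighted Sobolev gradient step with an inner optimization over the weights.
import numpy as np
from scipy.optimize import minimize

def sobolev_gradient(lam, grad_J, M, K_blocks):
    """Riesz representer of grad_J w.r.t. the lam-weighted inner product
    S(lam) = M + sum_i lam_i * K_i (assumed discretized form)."""
    S = M + sum(l * K for l, K in zip(lam, K_blocks))
    return np.linalg.solve(S, grad_J)

def optimal_weight_step(u, grad_J, hess_J, M, K_blocks, Tk, sigma, lam0):
    """One iteration: choose lam in R^N_+ so that sigma * g(lam) matches the
    Newton step on span(Tk), then take the descent step u - sigma * g."""
    # Newton direction; sign convention chosen so the update is u - h.
    h = np.linalg.solve(hess_J, grad_J)
    # Orthogonal projector onto span of the columns of Tk.
    Pk = Tk @ np.linalg.solve(Tk.T @ Tk, Tk.T)

    def misfit(lam):
        # Illustrative objective: squared norm of P_k(sigma * g(lam) - h_k).
        g = sobolev_gradient(lam, grad_J, M, K_blocks)
        r = Pk @ (sigma * g - h)
        return r @ r

    res = minimize(misfit, lam0, bounds=[(1e-12, None)] * len(lam0))
    g_opt = sobolev_gradient(res.x, grad_J, M, K_blocks)
    return u - sigma * g_opt, res.x
```

In this sketch the optimality condition $P_k(\sigma g_k - h_k) = 0$ is enforced only approximately, through the bound-constrained minimization over the weights; the paper's actual construction of the weight problem and of the subspace $T_k$ may differ.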
