Abstract

We investigate the following regularized gradient projection algorithm
$$x_{n+1} = P_C\big(I - \gamma_n(\nabla f + \alpha_n I)\big)x_n, \qquad n \ge 0.$$
Under suitable control conditions on the stepsizes $\gamma_n$ and the regularization parameters $\alpha_n$, we prove that this gradient projection algorithm converges strongly to the minimum-norm solution of the minimization problem
$$\min_{x \in C} f(x).$$

Highlights

  • Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$.

  • Note that (1.2) can be rewritten as $\langle x^* - (x^* - \gamma\nabla f(x^*)),\, x - x^*\rangle \ge 0$ for all $x \in C$. This shows that the minimization (1.1) is equivalent to the fixed point problem $x^* = P_C(x^* - \gamma\nabla f(x^*))$; a short derivation is sketched after this list.

  • The gradient-projection algorithm (1.5) is a powerful tool for solving constrained convex optimization problems and has been well studied in the case of constant stepsizes $\gamma_n = \gamma$ for all $n$.
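
For completeness, here is a short sketch of the equivalence claimed in the second highlight. It uses only the standard variational characterization of the metric projection, $z = P_C(u) \iff \langle u - z,\, x - z\rangle \le 0$ for all $x \in C$; this is a reconstruction of the usual argument, not quoted from the paper:

```latex
\begin{align*}
x^* = P_C\big(x^* - \gamma\nabla f(x^*)\big)
  &\iff \big\langle (x^* - \gamma\nabla f(x^*)) - x^*,\; x - x^*\big\rangle \le 0
        \quad \forall x \in C \\
  &\iff \big\langle \nabla f(x^*),\; x - x^*\big\rangle \ge 0
        \quad \forall x \in C \qquad (\text{since } \gamma > 0),
\end{align*}
```

and the last variational inequality is exactly the first-order optimality condition (1.2) for the convex minimization (1.1).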



Introduction

Let $C$ be a nonempty closed and convex subset of a real Hilbert space $H$. A point $x^*$ solves the minimization problem (1.1) if and only if it solves the fixed point equation
$$x^* = P_C\big(x^* - \gamma \nabla f(x^*)\big), \qquad (1.4)$$
where $\gamma > 0$ is any constant and $P_C$ is the nearest point projection from $H$ onto $C$. By using this relationship, the gradient-projection algorithm is usually applied to solve the minimization problem (1.1). The gradient-projection algorithm (1.5) is a powerful tool for solving constrained convex optimization problems and has been well studied in the case of constant stepsizes $\gamma_n = \gamma$ for all $n$; the reader can refer to [1–9] and the references therein. It is known [3] that if $f$ has a Lipschitz continuous and strongly monotone gradient, the sequence $\{x_n\}$ converges strongly to a minimizer of $f$ in $C$. Under suitable control conditions, we prove that this gradient projection algorithm converges strongly to the minimum-norm solution of the minimization problem (1.1).
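
As a concrete illustration, here is a minimal numerical sketch of the regularized iteration $x_{n+1} = P_C(I - \gamma_n(\nabla f + \alpha_n I))x_n$ on a toy problem. Everything in it (the box constraint, the objective, and the schedules for $\gamma_n$ and $\alpha_n$) is an illustrative assumption, not taken from the paper; the schedules are chosen so that $\alpha_n \to 0$ and $\sum_n \gamma_n \alpha_n = \infty$, in the spirit of the control conditions discussed above.

```python
import numpy as np

def project_box(x, lo=-2.0, hi=2.0):
    """Nearest-point projection P_C onto the box C = [lo, hi]^d (a toy choice of C)."""
    return np.clip(x, lo, hi)

def grad_f(x):
    """Gradient of the toy objective f(x) = (x_0 - 1)^2, whose minimizer set
    in C is the slab {x in C : x_0 = 1}; its minimum-norm element is e_1."""
    g = np.zeros_like(x)
    g[0] = 2.0 * (x[0] - 1.0)
    return g

def regularized_gradient_projection(x0, n_iters=20000):
    """Iterates x_{n+1} = P_C((I - gamma_n (grad f + alpha_n I)) x_n).

    The schedules below are illustrative assumptions: alpha_n -> 0 while
    sum_n gamma_n * alpha_n diverges, in the spirit of the paper's
    control conditions."""
    x = np.asarray(x0, dtype=float).copy()
    for n in range(n_iters):
        alpha_n = 1.0 / np.sqrt(n + 2.0)  # vanishing regularization
        gamma_n = 0.25                    # constant stepsize < 2/L (here L = 2)
        # (I - gamma_n (grad f + alpha_n I)) x  =  x - gamma_n (grad f(x) + alpha_n x)
        x = project_box(x - gamma_n * (grad_f(x) + alpha_n * x))
    return x

print(regularized_gradient_projection([0.0, 1.5, -1.0]))
# -> approximately [1, 0, 0], the minimum-norm minimizer of f over C
```

With $\alpha_n \equiv 0$ only the first coordinate would move, so the limit would depend on the initial point; it is the vanishing regularization term $\alpha_n x_n$ that drives the remaining coordinates to zero and singles out the minimum-norm minimizer.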

