Abstract

We consider a general class of convex optimization problems in which one seeks to minimize a strongly convex function over a closed convex set that is itself the optimal set of another convex problem in a Banach space. A regularized forward–backward splitting method is applied to find the minimal like-norm solution of the problem under investigation. We also introduce a gradient-based method, called the minimal like-norm gradient method, for solving this class of problems, and we establish convergence of the sequence generated by the algorithm as well as a rate of convergence for the sequence of function values.
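As a rough illustration of the kind of scheme the abstract describes, the sketch below applies a forward–backward iteration with a vanishing regularization term to a finite-dimensional toy problem: among all least-squares minimizers of an underdetermined linear system, find the one of smallest Euclidean norm. This is an assumed, simplified analogue for intuition only, not the paper's algorithm or setting (which is a general Banach-space framework); the step size, regularization schedule, and iteration count are illustrative choices.

```python
import numpy as np

def minimal_norm_fb(A, b, t=0.2, iters=2000):
    """Forward-backward iteration with diminishing regularization.

    Forward step: gradient descent on the inner objective (1/2)||Ax - b||^2.
    Backward step: proximal map of the vanishing regularizer (lam/2)||x||^2,
    which reduces to the shrinkage x -> x / (1 + t * lam).
    """
    x = np.zeros(A.shape[1])
    for k in range(iters):
        lam = 1.0 / (k + 1)               # regularization weight tending to 0
        y = x - t * A.T @ (A @ x - b)     # forward (gradient) step
        x = y / (1.0 + t * lam)           # backward (proximal) step
    return x

# Underdetermined system: infinitely many least-squares solutions.
A = np.array([[1.0, 2.0]])
b = np.array([3.0])

x = minimal_norm_fb(A, b)
# The minimal-norm least-squares solution is the pseudoinverse solution
# A^+ b = [0.6, 1.2], and the iterates approach it as lam -> 0.
print(np.round(x, 3))  # → [0.6 1.2]
```

Driving the regularization weight to zero is what singles out the minimal-norm point of the inner problem's solution set; with a fixed weight the iteration would instead converge to a Tikhonov-regularized solution.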
