Abstract

In this paper we analyze a family of general random block coordinate descent methods for the minimization of ℓ₀-regularized optimization problems, i.e., problems whose objective function is the sum of a smooth convex function and an ℓ₀ regularization term. Our family of methods covers particular cases such as random block coordinate gradient descent and random proximal coordinate descent methods. We analyze necessary optimality conditions for this nonconvex ℓ₀-regularized problem and devise a separation of the set of local minima into restricted classes based on approximation versions of the objective function. We provide a unified analysis of the almost sure convergence of this family of block coordinate descent algorithms and prove that, for each approximation version, the limit points are local minima from the corresponding restricted class of local minimizers.
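For intuition, a minimal Python sketch of one member of such a family is given below: a random proximal block coordinate step for F(x) = f(x) + λ‖x‖₀, where the proximal operator of the ℓ₀ term reduces to blockwise hard thresholding. This is an illustrative sketch under common assumptions, not the paper's exact scheme; the names f, grad_f, blocks, L, and lam are hypothetical notation introduced here.

```python
# Illustrative sketch (not the authors' exact method) of random proximal
# block coordinate descent for F(x) = f(x) + lam * ||x||_0, with f smooth
# and convex. At each iteration a block is drawn uniformly at random, a
# gradient step is taken on that block, and the l0 prox (hard thresholding)
# is applied. grad_f, blocks, L, and lam are assumed inputs.
import numpy as np

def hard_threshold(z, lam, L):
    """Prox of (lam/L)*||.||_0: keep z_j only if |z_j| > sqrt(2*lam/L)."""
    out = z.copy()
    out[np.abs(z) <= np.sqrt(2.0 * lam / L)] = 0.0
    return out

def random_prox_bcd(x0, grad_f, blocks, L, lam, n_iters=1000, seed=None):
    """blocks: list of index arrays partitioning the coordinates.
    L[i]: Lipschitz constant of the i-th block of grad_f.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(n_iters):
        i = rng.integers(len(blocks))          # sample a block uniformly
        idx = blocks[i]
        g = grad_f(x)[idx]                      # gradient of the chosen block
        z = x[idx] - g / L[i]                   # forward (gradient) step
        x[idx] = hard_threshold(z, lam, L[i])   # backward (l0 prox) step
    return x
```

The threshold sqrt(2λ/L) follows from minimizing the blockwise quadratic model (L/2)‖y − z‖² + λ‖y‖₀ componentwise: keeping z_j costs λ while zeroing it costs (L/2)z_j², so z_j survives exactly when (L/2)z_j² > λ.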
