Abstract

We introduce four accelerated (sub)gradient algorithms (ASGA) for solving several classes of convex optimization problems. More specifically, we propose two estimation sequences majorizing the objective function and develop two iterative schemes for each of them. In both cases, the first scheme requires the smoothness parameter and a Hölder constant, while the second scheme is parameter-free (except for the strong convexity parameter, which we set to zero if it is not available) at the price of applying a finitely terminating backtracking line search. The proposed algorithms attain the optimal complexity for smooth problems with Lipschitz continuous gradients, nonsmooth problems with bounded variation of subgradients, and weakly smooth problems with Hölder continuous gradients. Further, for strongly convex problems, they are optimal in the smooth case and nearly optimal in the nonsmooth and weakly smooth cases. Finally, numerical results for some applications in sparse optimization and machine learning are reported, which confirm the theoretical results.
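
As a concrete illustration of the parameter-free setting mentioned above, the sketch below shows a generic accelerated proximal gradient method (FISTA-style) with a backtracking line search, applied to a LASSO instance from sparse optimization. This is only an illustrative stand-in under our own assumptions: the function names, step-size rule, and problem data are ours, and the code is not the paper's ASGA schemes or estimation sequences.

```python
import numpy as np

def fista_backtracking(A, b, lam, x0, L0=1.0, eta=2.0, max_iter=200):
    """Generic accelerated proximal gradient (FISTA-style) with backtracking
    for the LASSO problem  min_x 0.5*||Ax - b||^2 + lam*||x||_1.
    Illustrative only; not the ASGA methods proposed in the paper."""
    def f(x):              # smooth part of the objective
        r = A @ x - b
        return 0.5 * (r @ r)

    def grad_f(x):         # gradient of the smooth part
        return A.T @ (A @ x - b)

    def prox_l1(v, t):     # soft-thresholding: prox of t*lam*||.||_1
        return np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0)

    x, y, t, L = x0.copy(), x0.copy(), 1.0, L0
    for _ in range(max_iter):
        g = grad_f(y)
        # Backtracking: increase the local Lipschitz estimate L until the
        # quadratic upper bound at y holds; terminates after finitely many steps.
        while True:
            x_new = prox_l1(y - g / L, 1.0 / L)
            d = x_new - y
            if f(x_new) <= f(y) + g @ d + 0.5 * L * (d @ d):
                break
            L *= eta
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)   # momentum step
        x, t = x_new, t_new
    return x

# Small synthetic example: recover a sparse vector from noisy measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((80, 200))
x_true = np.zeros(200)
x_true[:5] = rng.standard_normal(5)
b = A @ x_true + 0.01 * rng.standard_normal(80)
x_hat = fista_backtracking(A, b, lam=0.1, x0=np.zeros(200))
```

The backtracking loop plays the role of a line search that removes the need to know the smoothness parameter in advance, at the cost of a few extra function evaluations per iteration.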
