Abstract

For the optimization of the least absolute shrinkage and selection operator (Lasso) problem, the fastest existing algorithms have a convergence rate of O(1/ϵ). This polynomial order in 1/ϵ is caused by the nonsmooth behavior of the absolute value function at the origin. To expedite convergence, an algorithm called homotopy shrinkage yielding (HOSKY) is proposed, which accelerates the warm-up stage of existing algorithms. With the acceleration provided by HOSKY in the warm-up stage, one can obtain a provable convergence rate lower than O(1/ϵ). The main idea of the proposed HOSKY algorithm is to approximate the ℓ1 penalty used in Lasso by a sequence of surrogate functions. On the one hand, this sequence of surrogate functions approaches the ℓ1 penalty ever more closely; on the other hand, each surrogate is strictly convex and well-conditioned, which enables a provable exponential rate of convergence for gradient-based approaches. As the proof shows, the convergence rate of the HOSKY algorithm is O([log⁡(1/ϵw)]2), where ϵw is the precision used in the warm-up stage (ϵw↛0). Numerical simulations further show that HOSKY empirically performs better in the warm-up stage and accelerates the overall convergence.
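The homotopy idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual method: the abstract does not specify the surrogate family or schedule, so the smooth surrogate sqrt(x² + μ²) for |x|, the geometric μ-schedule, and the function name `hosky_warmup` are all assumptions made for illustration.

```python
import numpy as np

def smooth_l1_grad(x, mu):
    # Gradient of sqrt(x^2 + mu^2), a smooth, strictly convex surrogate
    # for |x| (one common choice; the paper's surrogate is not given here).
    return x / np.sqrt(x**2 + mu**2)

def hosky_warmup(A, b, lam, mu_schedule, steps=200):
    """Homotopy sketch: run gradient descent on
    0.5*||Ax - b||^2 + lam * sum_i sqrt(x_i^2 + mu^2)
    for a decreasing sequence of smoothing parameters mu, warm-starting
    each stage from the previous stage's solution."""
    _, n = A.shape
    x = np.zeros(n)
    # Step size from a Lipschitz bound on the gradient of the smoothed
    # objective (worst case at the smallest mu in the schedule).
    L = np.linalg.norm(A, 2) ** 2 + lam / min(mu_schedule)
    lr = 1.0 / L
    for mu in mu_schedule:  # surrogates approach the l1 penalty as mu -> 0
        for _ in range(steps):
            grad = A.T @ (A @ x - b) + lam * smooth_l1_grad(x, mu)
            x -= lr * grad
    return x

# Tiny synthetic example: recover a 3-sparse vector from noiseless data.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true
x_hat = hosky_warmup(A, b, lam=0.1, mu_schedule=[1.0, 0.1, 0.01])
```

Each stage's smoothed problem is well-conditioned, so gradient descent converges linearly on it; the warm start passes the iterate forward as μ shrinks, which is the mechanism the abstract credits for the accelerated warm-up.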
