Abstract

The iteratively reweighted $\ell_1$ algorithm is a popular method for solving a large class of optimization problems whose objective is the sum of a Lipschitz differentiable loss function and a possibly nonconvex sparsity-inducing regularizer. In this paper, motivated by the success of extrapolation techniques in accelerating first-order methods, we study how widely used extrapolation techniques such as those in Auslender and Teboulle (SIAM J Optim 16:697–725, 2006), Beck and Teboulle (SIAM J Imaging Sci 2:183–202, 2009), Lan et al. (Math Program 126:1–29, 2011) and Nesterov (Math Program 140:125–161, 2013) can be incorporated to possibly accelerate the iteratively reweighted $\ell_1$ algorithm. We consider three versions of such algorithms. For each version, we exhibit an explicitly checkable condition on the extrapolation parameters so that the sequence generated provably clusters at a stationary point of the optimization problem. We also investigate global convergence under additional Kurdyka–Łojasiewicz assumptions on certain potential functions. Our numerical experiments show that our algorithms usually outperform the general iterative shrinkage and thresholding algorithm in Gong et al. (Proc Int Conf Mach Learn 28:37–45, 2013) and an adaptation of the iteratively reweighted $\ell_1$ algorithm in Lu (Math Program 147:277–307, 2014, Algorithm 7) with nonmonotone line search for solving random instances of log-penalty regularized least squares problems, in terms of both CPU time and solution quality.
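To illustrate the kind of scheme the abstract describes, the following is a minimal Python sketch of an iteratively reweighted $\ell_1$ step combined with a FISTA-style extrapolation step, applied to a log-penalty regularized least squares model. The specific weight formula, step size, and momentum rule below are common textbook choices chosen for illustration; they are not necessarily the exact update rules or parameter conditions analyzed in the paper.

```python
import numpy as np

def irl1_extrapolated(A, b, lam=1.0, eps=0.5, max_iter=500, tol=1e-8):
    """Illustrative sketch (not the paper's exact algorithm): iteratively
    reweighted l1 with FISTA-style extrapolation for
        min_x 0.5*||Ax - b||^2 + lam * sum_i log(1 + |x_i|/eps).
    """
    m, n = A.shape
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient of the loss
    x_prev = np.zeros(n)
    x = np.zeros(n)
    t_prev, t = 1.0, 1.0                      # FISTA momentum parameters
    for _ in range(max_iter):
        beta = (t_prev - 1.0) / t             # extrapolation parameter
        y = x + beta * (x - x_prev)           # extrapolated point
        grad = A.T @ (A @ y - b)              # gradient of the smooth loss at y
        w = lam / (eps + np.abs(x))           # reweighting induced by the log penalty
        z = y - grad / L                      # gradient step from the extrapolated point
        x_new = np.sign(z) * np.maximum(np.abs(z) - w / L, 0.0)  # weighted soft-thresholding
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x_prev, x = x, x_new
        t_prev, t = t, 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    return x
```

In this sketch, each iteration solves the weighted $\ell_1$ subproblem inexactly by a single proximal gradient step from the extrapolated point, which is one natural way to combine reweighting with momentum; the paper studies conditions on the extrapolation parameters under which such schemes retain convergence guarantees.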
