Abstract
In recent years, generalized iterated shrinkage thresholding (GIST) algorithms have become commonly used first-order optimization methods for sparse learning problems. Nonconvex relaxations of the l0-norm usually achieve better performance than convex ones (e.g., the l1-norm), since the former yield nearly unbiased solutions. To improve computational efficiency, this work further develops an accelerated GIST variant, AGIST, based on an extrapolation acceleration technique, which reduces the number of iterations required to solve a family of nonconvex sparse learning problems. We also present an algorithmic analysis of GIST and AGIST, collectively denoted (A)GIST, including both local and global convergence guarantees as well as other intermediate results, by virtue of the Kurdyka-Łojasiewicz (KŁ) property and some mild assumptions. Numerical experiments on both synthetic data and real-world databases demonstrate that the convergence behavior of the objective function accords with the theoretical results and that nonconvex sparse learning methods achieve superior performance over some convex ones.
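To make the idea concrete, below is a minimal sketch of an extrapolated proximal-gradient (AGIST-style) iteration for a nonconvex sparse least-squares problem; it is not the authors' exact algorithm. The MCP penalty (one common nonconvex relaxation of the l0-norm), the fixed extrapolation weight beta, the least-squares loss, and all names (A, b, lam, gamma, mcp_prox, agist_sketch) are illustrative assumptions; setting beta = 0 recovers a plain GIST-style update.

```python
# Hedged sketch: extrapolated proximal-gradient iterations for
# min_x 0.5*||A x - b||^2 + MCP(x), an assumed stand-in for the (A)GIST setting.
import numpy as np


def mcp_prox(z, lam, gamma, eta):
    """Closed-form proximal operator of eta * MCP (firm thresholding).

    Valid when gamma > eta; MCP is a nonconvex relaxation of the l0-norm
    whose thresholding rule is nearly unbiased for large coefficients.
    """
    shrink = np.sign(z) * np.maximum(np.abs(z) - eta * lam, 0.0) / (1.0 - eta / gamma)
    return np.where(np.abs(z) <= gamma * lam, shrink, z)


def agist_sketch(A, b, lam=0.1, gamma=3.0, beta=0.5, max_iter=500, tol=1e-8):
    """AGIST-like iterations with a fixed extrapolation weight beta.

    beta = 0 gives the non-accelerated (GIST-like) proximal-gradient update.
    """
    n = A.shape[1]
    eta = 1.0 / np.linalg.norm(A.T @ A, 2)   # step size 1/L for the smooth part
    x_prev = x = np.zeros(n)
    for _ in range(max_iter):
        y = x + beta * (x - x_prev)          # extrapolation (acceleration) step
        grad = A.T @ (A @ y - b)             # gradient of 0.5*||A y - b||^2
        x_prev, x = x, mcp_prox(y - eta * grad, lam, gamma, eta)
        if np.linalg.norm(x - x_prev) <= tol * max(1.0, np.linalg.norm(x_prev)):
            break
    return x


if __name__ == "__main__":
    # Small synthetic recovery example with a 10-sparse ground truth.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 300))
    x_true = np.zeros(300)
    x_true[rng.choice(300, 10, replace=False)] = rng.standard_normal(10)
    b = A @ x_true + 0.01 * rng.standard_normal(100)
    x_hat = agist_sketch(A, b)
    print("nonzeros recovered:", np.count_nonzero(x_hat))
```

In this sketch the extrapolation point y reuses the momentum of the previous step, which is what reduces the iteration count relative to the plain proximal update under suitable step-size and extrapolation-weight conditions.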