Abstract

In this paper, we propose an accelerated proximal gradient method with line search for solving large-scale nonconvex penalty problems. In contrast to the classic proximal gradient method, the new method does not need to evaluate the Lipschitz constant: its parameters are initialized with aggressive values and updated adaptively at each iteration. Under some common assumptions, global subsequential convergence is established. Moreover, numerical results on large-scale SCAD and MCP nonconvex penalty problems are reported to demonstrate the efficiency and superiority of the proposed method.
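As context for the abstract, the following is a minimal sketch of the basic idea the method builds on: a proximal gradient step whose step size is chosen by backtracking line search, so no Lipschitz constant is evaluated. This is an illustrative baseline, not the authors' accelerated algorithm; the MCP proximal operator (firm thresholding), the least-squares loss, and all parameter values here are assumptions for illustration.

```python
import numpy as np

def mcp_prox(z, t, lam, gamma):
    # Proximal operator of the MCP penalty (firm thresholding); assumes gamma > t.
    # Entries beyond gamma*lam in magnitude are left unshrunk (unbiasedness of MCP).
    return np.where(
        np.abs(z) > gamma * lam,
        z,
        np.sign(z) * np.maximum(np.abs(z) - t * lam, 0.0) / (1.0 - t / gamma),
    )

def prox_grad_backtracking(A, b, lam=0.1, gamma=3.0, t0=1.0, beta=0.5, iters=200):
    # Minimize 0.5*||Ax - b||^2 + MCP(x; lam, gamma) by proximal gradient.
    # The step size t starts from an aggressive value t0 and is shrunk by
    # backtracking until a sufficient-decrease condition holds, so the
    # Lipschitz constant of the gradient is never computed explicitly.
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        r = A @ x - b
        g = A.T @ r                      # gradient of the smooth part
        f_x = 0.5 * np.dot(r, r)
        t = t0
        while True:
            x_new = mcp_prox(x - t * g, t, lam, gamma)
            d = x_new - x
            r_new = A @ x_new - b
            f_new = 0.5 * np.dot(r_new, r_new)
            # Descent-lemma surrogate: accept the step if the smooth part is
            # majorized at x_new for this t; otherwise shrink t and retry.
            if f_new <= f_x + g @ d + np.dot(d, d) / (2.0 * t) + 1e-12:
                break
            t *= beta
        x = x_new
    return x
```

The accelerated method in the paper additionally carries an extrapolation step and adapts its line-search parameters across iterations, which this plain sketch omits.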
