Abstract

In this paper, a family of coordinate majorization descent algorithms is proposed for solving nonconvex penalized learning problems, including SCAD and MCP estimation. In these algorithms, each coordinate descent step is replaced with a coordinate-wise majorization descent operation, and the convergence of the algorithms is established for linear models. In addition, we apply the algorithms to logistic models. Our simulation study and data examples indicate that the coordinate majorization descent algorithms select the true model with higher probability, produce sparser models, and improve the accuracy of parameter estimation under the SCAD and MCP penalties.
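
To illustrate the idea of replacing each coordinate descent step with a coordinate-wise majorization descent update, the following is a minimal sketch for MCP-penalized least squares in a linear model. It assumes standardized predictors (so the coordinate-wise quadratic majorizer has unit curvature); the function names `cmd_mcp` and `mcp_threshold` are illustrative and not taken from the paper, and this is not the authors' exact algorithm.

```python
import numpy as np

def mcp_threshold(z, lam, gamma):
    """Univariate minimizer of 0.5*(b - z)^2 + MCP(b; lam, gamma),
    assuming unit curvature (standardized predictor) and gamma > 1."""
    if abs(z) <= gamma * lam:
        s = np.sign(z) * max(abs(z) - lam, 0.0)   # soft-thresholding
        return s / (1.0 - 1.0 / gamma)
    return z                                       # no shrinkage beyond gamma*lam

def cmd_mcp(X, y, lam, gamma=3.0, n_iter=200, tol=1e-6):
    """Sketch of coordinate-wise majorization descent for MCP-penalized
    least squares; columns of X are assumed standardized (mean 0, variance 1)."""
    n, p = X.shape
    beta = np.zeros(p)
    r = y - X @ beta                               # current residual
    for _ in range(n_iter):
        max_change = 0.0
        for j in range(p):
            # majorize the loss in coordinate j by a unit-curvature quadratic
            z = X[:, j] @ r / n + beta[j]
            b_new = mcp_threshold(z, lam, gamma)
            if b_new != beta[j]:
                r -= X[:, j] * (b_new - beta[j])   # keep the residual in sync
                max_change = max(max_change, abs(b_new - beta[j]))
                beta[j] = b_new
        if max_change < tol:
            break
    return beta
```

For the SCAD penalty, the same loop applies with `mcp_threshold` replaced by the corresponding SCAD thresholding rule; for logistic models, the squared-error loss is replaced by a quadratic majorizer of the logistic log-likelihood.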
