Abstract

This paper studies quantile regression with non-convex, non-smooth sparsity-inducing penalties such as the minimax concave penalty (MCP) and the smoothly clipped absolute deviation (SCAD) penalty. Although iterative coordinate descent and local linear approximation techniques can solve the resulting quantile regression problem, their convergence is slow under the MCP and SCAD penalties. The alternating direction method of multipliers (ADMM), however, can be exploited to speed up convergence. Hence, this paper proposes a new ADMM algorithm with an increasing penalty parameter, called IAD, for sparse-penalized quantile regression. We first investigate the convergence of the proposed algorithm and establish conditions under which it converges. We then present numerical results demonstrating its efficacy. Our results show that the proposed IAD algorithm handles sparse-penalized quantile regression more effectively than state-of-the-art methods.
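To make the setting concrete, below is a minimal sketch of one way an ADMM scheme with an increasing penalty parameter can be structured for MCP-penalized quantile regression. It uses the common splitting z = y - X*beta with a linearized beta-step; the splitting, the proximal operators, the geometric schedule for sigma, and all parameter defaults are illustrative assumptions, not the paper's exact IAD algorithm.

```python
import numpy as np

def prox_check(v, t, tau):
    """Elementwise prox of t * rho_tau, where rho_tau(u) = u * (tau - 1{u < 0})."""
    return np.maximum(v - t * tau, 0.0) + np.minimum(v + t * (1.0 - tau), 0.0)

def prox_mcp(v, t, lam, gamma):
    """Elementwise prox of t * MCP(.; lam, gamma); the closed form requires gamma > t."""
    shrink = np.sign(v) * np.maximum(np.abs(v) - t * lam, 0.0) / (1.0 - t / gamma)
    return np.where(np.abs(v) > gamma * lam, v, shrink)

def iad_quantile_mcp(X, y, tau=0.5, lam=0.1, gamma=3.0,
                     sigma=0.1, rho=1.1, sigma_max=1e4,
                     max_iter=1000, tol=1e-6):
    """Illustrative ADMM with increasing penalty parameter sigma for
    min_beta (1/n) * sum_i rho_tau(y_i - x_i' beta) + sum_j MCP(beta_j),
    using the splitting z = y - X beta and scaled dual variable u."""
    n, p = X.shape
    eta = np.linalg.norm(X, 2) ** 2  # upper bound used to linearize the beta-step
    beta, z, u = np.zeros(p), y.copy(), np.zeros(n)
    for _ in range(max_iter):
        # beta-step: one proximal-linearized step on the augmented quadratic
        grad = X.T @ (X @ beta + z - y + u)
        beta = prox_mcp(beta - grad / eta, 1.0 / (sigma * eta), lam, gamma)
        # z-step: exact prox of the (1/n)-scaled check loss
        z = prox_check(y - X @ beta - u, 1.0 / (n * sigma), tau)
        # dual ascent on the constraint X beta + z = y
        r = X @ beta + z - y
        u = u + r
        if np.linalg.norm(r) <= tol * max(1.0, np.linalg.norm(y)):
            break
        # increase sigma; rescale u so the unscaled dual sigma*u is unchanged
        sigma_new = min(rho * sigma, sigma_max)
        u *= sigma / sigma_new
        sigma = sigma_new
    return beta
```

Under these assumptions, a call such as iad_quantile_mcp(X, y, tau=0.9) would fit a sparse 0.9-quantile model; the multiplicative growth of sigma is the "increasing penalty parameter" device the abstract refers to, which tightens enforcement of the splitting constraint as the iterates stabilize.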
