Abstract

In this paper, we study high-dimensional reduced rank regression and propose a doubly robust procedure, called D4R, which is simultaneously robust to outliers in the predictors and to heavy-tailed random noise. The proposed method uses a composite gradient descent algorithm to solve the nonconvex optimization problem that results from combining Tukey’s biweight loss with spectral regularization. Both theoretical and numerical properties of D4R are investigated. We establish non-asymptotic estimation error bounds under both the Frobenius norm and the nuclear norm in the high-dimensional setting. Simulation studies and a real data example show that D4R outperforms several existing estimation methods.
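To make the optimization step concrete, the following is a minimal NumPy sketch of a composite (proximal) gradient iteration that pairs Tukey's biweight loss with nuclear-norm (spectral) soft-thresholding. The function names, step-size rule, and tuning constants (e.g., `c = 4.685`, `lam`) are illustrative assumptions and not the authors' actual D4R implementation.

```python
import numpy as np

def tukey_psi(r, c=4.685):
    """Derivative of Tukey's biweight loss, applied elementwise (assumed tuning constant c)."""
    return np.where(np.abs(r) <= c, r * (1.0 - (r / c) ** 2) ** 2, 0.0)

def svd_soft_threshold(C, tau):
    """Proximal operator of the nuclear norm: soft-threshold the singular values by tau."""
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

def robust_rrr_sketch(X, Y, lam=0.5, step=None, c=4.685, n_iter=500):
    """Composite gradient descent sketch for
    minimize_C (1/n) * sum_ij rho_c(Y_ij - (X C)_ij) + lam * ||C||_* ."""
    n, p = X.shape
    q = Y.shape[1]
    if step is None:
        # crude step size from the spectral norm of X (assumption, not the paper's choice)
        step = n / (np.linalg.norm(X, 2) ** 2)
    C = np.zeros((p, q))
    for _ in range(n_iter):
        R = Y - X @ C                         # residual matrix
        grad = -(X.T @ tukey_psi(R, c)) / n   # gradient of the smooth Tukey loss part
        C = svd_soft_threshold(C - step * grad, step * lam)
    return C

# toy usage with a low-rank coefficient matrix and heavy-tailed noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
C_true = rng.normal(size=(10, 3)) @ rng.normal(size=(3, 8))   # rank-3 coefficients
Y = X @ C_true + rng.standard_t(df=2, size=(200, 8))
C_hat = robust_rrr_sketch(X, Y, lam=0.5)
```

Each iteration takes a gradient step on the smooth robust loss and then applies singular value soft-thresholding, which is the proximal map of the nuclear norm and encourages a low-rank coefficient estimate.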
