Abstract

Decentralized low-rank learning is an active research area with broad practical applications. A common approach to obtaining low-rank, robust estimates is to combine the nonsmooth quantile regression loss with a nuclear-norm regularizer. However, directly applying existing techniques to this doubly nonsmooth objective can lead to slow convergence. To speed up computation, this paper proposes a decentralized surrogate matrix quantile regression method. The proposed algorithm is simple to implement and provably converges at a linear rate. We further provide a statistical guarantee that our estimator achieves a near-optimal convergence rate, regardless of the number of nodes. Numerical simulations confirm the efficacy of the approach.
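
To make the "doubly nonsmooth objective" concrete, a typical matrix quantile regression formulation of the kind described above (the exact notation here is an illustrative assumption, not taken from the paper) combines the quantile check loss with a nuclear-norm penalty:

```latex
% Illustrative composite objective (notation assumed, not from the paper):
% \Theta is the low-rank coefficient matrix, X_i the design matrices,
% \rho_\tau(u) = u(\tau - \mathbb{1}\{u < 0\}) the quantile check loss,
% and \|\Theta\|_* the nuclear norm (sum of singular values).
\min_{\Theta} \; \frac{1}{N}\sum_{i=1}^{N} \rho_\tau\bigl(y_i - \langle X_i, \Theta \rangle\bigr) \;+\; \lambda \|\Theta\|_*
```

Both the check loss $\rho_\tau$ and the nuclear norm $\|\cdot\|_*$ are convex but nondifferentiable, which is why generic decentralized subgradient schemes tend to converge slowly on such problems.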
