Abstract
The ℓp-norm regression problem is a classic problem in optimization with wide-ranging applications in machine learning and theoretical computer science. The goal is to compute \(\boldsymbol {\mathit {x}}^{\star } =\arg \min _{\boldsymbol {\mathit {A}}\boldsymbol {\mathit {x}}=\boldsymbol {\mathit {b}}}\Vert \boldsymbol {\mathit {x}}\Vert _p^p \), where \(\boldsymbol {\mathit {x}}^{\star }\in \mathbb {R}^n\), \(\boldsymbol {\mathit {A}}\in \mathbb {R}^{d\times n}\), \(\boldsymbol {\mathit {b}} \in \mathbb {R}^d \), and \(d \le n\). Efficient high-accuracy algorithms for the problem have been challenging both in theory and practice, and the state-of-the-art algorithms require \(\mathrm{poly}(p)\cdot n^{\frac{1}{2}-\frac{1}{p}} \) linear system solves for \(p \ge 2\). In this paper, we provide new algorithms for ℓp-regression (and a more general formulation of the problem) that obtain a high-accuracy solution in \(O(pn^{(p-2)/(3p-2)})\) linear system solves. We further propose a new inverse maintenance procedure that speeds up our algorithm to \(\widetilde{O}(n^{\omega }) \) total runtime, where \(O(n^{\omega })\) denotes the running time for multiplying \(n \times n\) matrices. Additionally, we give the first Iteratively Reweighted Least Squares (IRLS) algorithm that is guaranteed to converge to an optimum in a few iterations. Our IRLS algorithm has shown exceptional practical performance, beating the currently available implementations in MATLAB/CVX by 10–50x.