Abstract
Many machine learning problems can be reduced to learning a low-rank positive semidefinite matrix (denoted as Z), which leads to a semidefinite program (SDP). Existing SDP solvers are often too expensive for large-scale learning. To avoid solving the SDP directly, some works convert it into a nonconvex program by factorizing Z as XX⊤. However, this introduces higher-order nonlinearity, leaving little structure to exploit in the subsequent optimization. In this paper, we propose a novel surrogate for SDP-based learning that exploits the structure of the subproblems. Specifically, we replace the unconstrained SDP with a biconvex problem by factorizing Z as XY⊤ and adding a quadratic penalty on the difference between X and Y, so that the resulting subproblems are convex. Furthermore, assuming the objective function is Lipschitz-smooth, we derive a theoretical bound on the penalty parameter: whenever the penalty parameter exceeds this bound, the proposed surrogate solves the original SDP. Experiments on three SDP-based machine learning problems demonstrate that the proposed algorithm is as accurate as the state of the art, but faster on large-scale problems.
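To make the biconvex surrogate concrete, below is a minimal sketch of the idea for an illustrative quadratic objective f(Z) = ||Z − M||_F² (the abstract does not specify the objective, the solver for each subproblem, or the penalty schedule, so the closed-form alternating updates, the function name `biconvex_sdp_surrogate`, and the parameter `rho` here are assumptions for illustration only). With f quadratic, each subproblem is a convex least-squares problem and can be solved exactly:

```python
import numpy as np

def biconvex_sdp_surrogate(M, rank, rho=10.0, n_iters=200, seed=0):
    """Sketch of the XY^T surrogate for the illustrative objective
    f(Z) = ||Z - M||_F^2 over low-rank PSD matrices Z.

    Instead of factorizing Z = X X^T (which yields a quartic, hard-to-exploit
    nonconvex program), factorize Z = X Y^T and add the quadratic penalty
    (rho/2) ||X - Y||_F^2. Each alternating subproblem below is then convex
    in the variable being updated (here, with a closed-form solution).
    """
    rng = np.random.default_rng(seed)
    n = M.shape[0]
    X = rng.standard_normal((n, rank))
    Y = X.copy()
    I = np.eye(rank)
    for _ in range(n_iters):
        # X-step: argmin_X ||X Y^T - M||_F^2 + (rho/2)||X - Y||_F^2,
        # from the stationarity condition X (2 Y^T Y + rho I) = 2 M Y + rho Y.
        X = (2 * M @ Y + rho * Y) @ np.linalg.inv(2 * Y.T @ Y + rho * I)
        # Y-step: the symmetric update with X fixed,
        # from Y (2 X^T X + rho I) = 2 M^T X + rho X.
        Y = (2 * M.T @ X + rho * X) @ np.linalg.inv(2 * X.T @ X + rho * I)
    # Once the penalty has driven X close to Y, X Y^T is approximately
    # symmetric PSD, recovering a feasible point of the original SDP.
    return X @ Y.T
```

Per the bound discussed in the abstract, rho must be taken large enough (relative to the Lipschitz-smoothness constant of f) for the surrogate to recover the SDP solution; the fixed value used above is a placeholder.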