Abstract

Smooth convex minimization over the unit trace-norm ball is an important optimization problem in machine learning, signal processing, statistics, and other fields that underlies many tasks in which one wishes to recover a low-rank matrix given certain measurements. While first-order methods for convex optimization enjoy optimal convergence rates, in the worst case they require a full-rank SVD computation on each iteration in order to compute the Euclidean projection onto the trace-norm ball. These full-rank SVD computations, however, prohibit the application of such methods to large-scale problems. A simple and natural heuristic for reducing the computational cost of such methods is to approximate the Euclidean projection using only a low-rank SVD. This raises the question of whether, and under what conditions, this simple heuristic can indeed result in provable convergence to the optimal solution. In this paper we show that any optimal solution is the center of a Euclidean ball inside which the projected-gradient mapping admits a rank that is at most the multiplicity of the largest singular value of the gradient vector at this optimal point. Moreover, the radius of the ball scales with the spectral gap of this gradient vector. We show how this readily implies the local convergence (i.e., from a "warm-start" initialization) of standard first-order methods, such as the projected-gradient method and accelerated gradient methods, using only low-rank SVD computations. We also quantify the effect of "over-parameterization," i.e., using SVD computations of higher rank, on the radius of this ball, showing that it can increase dramatically with moderately larger rank. We further extend our results to the settings of smooth convex minimization with trace-norm regularization and smooth convex optimization over bounded-trace positive semidefinite matrices. Our theoretical investigation is supported by concrete empirical evidence that demonstrates the correct convergence of first-order methods with low-rank projections for the matrix completion task on real-world datasets.
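
For concreteness, the following is a minimal sketch (in Python/NumPy, not taken from the paper) of the low-rank projection heuristic described above: the point to be projected is replaced by its rank-r truncated SVD, and only its top-r singular values are projected onto the unit l1-ball. The function names, the dense np.linalg.svd call (a truncated solver such as scipy.sparse.linalg.svds would be used at scale), and the gradient/step-size placeholders are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_singular_values(s, radius=1.0):
    # Euclidean projection of a nonnegative, descending vector s onto the
    # l1-ball of the given radius (standard sort-and-threshold projection).
    if s.sum() <= radius:
        return s
    css = np.cumsum(s)
    idx = np.arange(1, len(s) + 1)
    rho = np.nonzero(s * idx > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.maximum(s - theta, 0.0)

def low_rank_projection(Y, r):
    # Approximate the Euclidean projection of Y onto the unit trace-norm ball
    # using only a rank-r SVD of Y (the heuristic discussed in the abstract).
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)  # at scale: a truncated SVD solver
    U, s, Vt = U[:, :r], s[:r], Vt[:r, :]
    s_proj = project_singular_values(s, radius=1.0)
    return (U * s_proj) @ Vt

# One projected-gradient step with a rank-r projection (grad_f, X, eta, r are placeholders):
# X = low_rank_projection(X - eta * grad_f(X), r)
```
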
