Abstract

Random projection (RP) is a classical technique for reducing storage and computational costs. We analyze RP-based approximations of convex programs, in which the original optimization problem is approximated by solving a lower-dimensional problem. Such dimensionality reduction is essential in computation-limited settings, since the complexity of general convex programming can be quite high (e.g., cubic for quadratic programs, and substantially higher for semidefinite programs). In addition to computational savings, RP is also useful for reducing memory usage, and it has useful properties for privacy-preserving optimization. We prove that the approximation ratio of this procedure can be bounded in terms of the geometry of the constraint set. For a broad class of RPs, including those based on various sub-Gaussian distributions as well as randomized Hadamard and Fourier transforms, the data matrix defining the cost function can be projected to a dimension proportional to the squared Gaussian width of the tangent cone of the constraint set at the original solution. This effective dimension of the convex program is often substantially smaller than the original dimension. We illustrate consequences of our theory for various cases, including unconstrained and $\ell_1$-constrained least squares, support vector machines, and low-rank matrix estimation, and we discuss implications for privacy-preserving optimization, as well as connections with denoising and compressed sensing.
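As a concrete illustration of the general idea (a minimal sketch, not the paper's exact procedure or analysis), the following NumPy snippet approximates an unconstrained least-squares problem by projecting the data with a Gaussian random projection and solving the smaller problem. The dimensions `n`, `d`, `m` and the noise level are arbitrary illustrative choices; the paper's theory ties the projection dimension to the squared Gaussian width of the tangent cone at the original solution.

```python
import numpy as np

# Sketch: approximate the overconstrained least-squares problem
#   min_x ||A x - y||_2
# by projecting (A, y) with a Gaussian random projection S and solving
#   min_x ||S A x - S y||_2.

rng = np.random.default_rng(0)

n, d = 5000, 50   # original dimension n, ambient dimension d
m = 400           # projection dimension (hypothetical choice)

A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
y = A @ x_true + 0.1 * rng.standard_normal(n)

# Gaussian random projection, scaled so that E[S^T S] = I
S = rng.standard_normal((m, n)) / np.sqrt(m)

# Solve the projected (sketched) problem
x_sketch, *_ = np.linalg.lstsq(S @ A, S @ y, rcond=None)

# Compare against the solution of the original problem
x_full, *_ = np.linalg.lstsq(A, y, rcond=None)
print("relative error vs. full solve:",
      np.linalg.norm(x_sketch - x_full) / np.linalg.norm(x_full))
```

The sketched solve costs O(m d^2) rather than O(n d^2) for the factorization step, which is the source of the computational savings when m can be taken much smaller than n.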
