Abstract

Finding a small spectral approximation for a tall n × d matrix A is a fundamental numerical primitive. For a number of reasons, one often seeks an approximation whose rows are sampled from those of A. Row sampling improves interpretability, saves space when A is sparse, and preserves row structure, which is especially important, for example, when A represents a graph. However, correctly sampling rows from A can be costly when the matrix is large and cannot be stored and processed in memory. Hence, a number of recent publications focus on row sampling in the streaming setting, using little more space than what is required to store the output approximation [Kelner, Levin 2013] [Kapralov et al. 2014]. Inspired by a growing body of work on online algorithms for machine learning and data analysis, we extend this work to a more restrictive online setting: we read rows of A one by one and immediately decide whether each row should be kept in the spectral approximation or discarded, without ever retracting these decisions. We present an extremely simple algorithm that approximates A up to multiplicative error ε and additive error δ using O(d log d log(ε‖A‖₂²/δ)/ε²) online samples, with memory overhead proportional to the cost of storing the spectral approximation. We also present an algorithm that uses O(d²) memory but only requires O(d log(ε‖A‖₂²/δ)/ε²) samples, which we show is optimal. Our methods are clean and intuitive, allow for lower memory usage than prior work, and expose new theoretical properties of leverage score based matrix approximation.
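
The online scheme the abstract describes can be sketched in a few lines: each incoming row is kept or discarded immediately, with probability driven by its leverage score measured against the approximation built so far, regularized so that early rows are not all kept. This is a minimal illustrative sketch; the constant c, the regularizer δ/ε, and the function name `online_row_sample` are assumptions for exposition, not the paper's exact parameters.

```python
import numpy as np

def online_row_sample(rows, d, eps=0.5, delta=1e-3, c=8.0):
    """Sketch of online row sampling via online (ridge) leverage scores.

    Each incoming row a is kept with probability proportional to its
    leverage score measured against the rows kept so far, regularized
    by (delta/eps) * I so the score is well defined from the start.
    Kept rows are rescaled by 1/sqrt(p) so that B^T B approximates
    A^T A in expectation. Decisions are never retracted.
    """
    M = (delta / eps) * np.eye(d)   # running B^T B + (delta/eps) I
    kept = []
    rng = np.random.default_rng(0)
    for a in rows:
        # online ridge leverage score of the incoming row
        l = float(a @ np.linalg.solve(M, a))
        p = min(1.0, c * l * np.log(d) / eps**2)
        if rng.random() < p:        # irrevocable keep-or-discard decision
            w = a / np.sqrt(p)
            kept.append(w)
            M += np.outer(w, w)     # fold the rescaled row into the sketch
    return np.array(kept)
```

Note that the memory used is only the sketch M and the kept rows themselves, matching the abstract's claim that overhead is proportional to the cost of storing the approximation.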

Highlights

  • A spectral approximation to a tall n × d matrix A is a smaller, typically O(d) × d, matrix Ã such that ‖Ãx‖₂ ≈ ‖Ax‖₂ for all x

  • Inspired by a growing body of work on online algorithms for machine learning and data analysis, we extend this work to a more restrictive online setting: we read rows of A one by one and immediately decide whether each row should be kept in the spectral approximation or discarded, without ever retracting these decisions

  • It is well known that sampling O(d log d/ε²) rows of A with probabilities proportional to their leverage scores yields a (1 ± ε)-factor spectral approximation to A. This sampling can be done in input sparsity time, either using subspace embeddings to approximate leverage scores, or using iterative sampling techniques [20], some of which work only with subsampled versions of the original matrix [11]


Summary

Background

A spectral approximation to a tall n × d matrix A is a smaller, typically O(d) × d, matrix Ã such that ‖Ãx‖₂ ≈ ‖Ax‖₂ for all x. Such approximations can be computed in input sparsity time, i.e., with running time scaling linearly in the number of nonzero entries in A. These methods produce Ã by randomly recombining the rows of A into a smaller number of rows. It is well known that sampling O(d log d/ε²) rows of A with probabilities proportional to their leverage scores yields a (1 ± ε)-factor spectral approximation to A. This sampling can be done in input sparsity time, either using subspace embeddings to approximate leverage scores, or using iterative sampling techniques [20], some of which work only with subsampled versions of the original matrix [11].
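
The classical (offline) leverage score sampling mentioned above can be sketched as follows. The leverage score of row i is τᵢ = aᵢᵀ(AᵀA)⁺aᵢ, and keeping each row with probability proportional to τᵢ log d/ε², rescaled by 1/√pᵢ, yields a (1 ± ε) spectral approximation with O(d log d/ε²) rows with high probability. The constant c and the function name are illustrative assumptions; a real implementation would approximate the scores rather than invert AᵀA directly.

```python
import numpy as np

def leverage_score_sample(A, eps=0.5, c=4.0, seed=0):
    """Sketch of classical leverage score row sampling (offline).

    tau_i = a_i^T (A^T A)^+ a_i measures how important row i is to
    A's spectrum; leverage scores always sum to rank(A). Sampling
    row i with probability ~ c * tau_i * log d / eps^2 and rescaling
    kept rows by 1/sqrt(p_i) preserves A^T A in expectation.
    """
    n, d = A.shape
    G = np.linalg.pinv(A.T @ A)
    tau = np.einsum('ij,jk,ik->i', A, G, A)        # leverage scores
    p = np.minimum(1.0, c * tau * np.log(d) / eps**2)
    rng = np.random.default_rng(seed)
    keep = rng.random(n) < p
    return A[keep] / np.sqrt(p[keep])[:, None]
```

Unlike the online setting, this sketch needs the whole matrix up front to compute (AᵀA)⁺, which is exactly the access pattern the paper's online algorithms avoid.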

Streaming and online row sampling
Our results
Overview
Analysis of sampling schemes
Asymptotically optimal algorithm
Matching lower bound
Future work

