Abstract

We give efficient algorithms for volume sampling, i.e., for picking $k$-subsets of the rows of any given matrix with probabilities proportional to the squared volumes of the simplices defined by them and the origin (or the squared volumes of the parallelepipeds defined by these subsets of rows). Equivalently, we can efficiently sample $k$-subsets of $[m]$ with probabilities proportional to the corresponding $k \times k$ principal minors of any given $m \times m$ positive semidefinite matrix. This solves an open problem from the monograph on spectral algorithms by Kannan and Vempala (see Section 7.4 of \cite{KV}; also implicit in \cite{BDM, DRVW}). Our first algorithm for volume sampling $k$-subsets of rows from an $m$-by-$n$ matrix runs in $O(kmn^\omega \log n)$ arithmetic operations (where $\omega$ is the exponent of matrix multiplication), and a second variant of it for $(1+\eps)$-approximate volume sampling runs in $O(mn \log m \cdot k^{2}/\eps^{2} + m \log^{\omega} m \cdot k^{2\omega+1}/\eps^{2\omega} \cdot \log(k \eps^{-1} \log m))$ arithmetic operations, which is almost linear in the size of the input (i.e., the number of entries) for small $k$. Our efficient volume sampling algorithms imply the following results for low-rank matrix approximation: (1) Given $A \in \reals^{m \times n}$, in $O(kmn^{\omega} \log n)$ arithmetic operations we can find $k$ of its rows such that projecting onto their span gives a $\sqrt{k+1}$-approximation to the matrix of rank $k$ closest to $A$ under the Frobenius norm. This improves the $O(k \sqrt{\log k})$-approximation of Boutsidis, Drineas and Mahoney \cite{BDM} and matches the lower bound shown in \cite{DRVW}. The method of conditional expectations gives a \emph{deterministic} algorithm with the same complexity.
The running time can be improved to $O(mn \log m \cdot k^{2}/\eps^{2} + m \log^{\omega} m \cdot k^{2\omega+1}/\eps^{2\omega} \cdot \log(k \eps^{-1} \log m))$ at the cost of losing an extra $(1+\eps)$ in the approximation factor. (2) The same rows and projection as in the previous point give a $\sqrt{(k+1)(n-k)}$-approximation to the matrix of rank $k$ closest to $A$ under the spectral norm. In this paper, we show an almost matching lower bound of $\sqrt{n}$, even for $k=1$.
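To make the target distribution concrete, here is a brute-force sketch (our own illustration, not one of the paper's efficient algorithms): for a tiny matrix one can enumerate all $k$-subsets of rows and sample a subset with probability proportional to the determinant of its Gram matrix, which equals the squared $k$-dimensional volume of the parallelepiped spanned by those rows. The function name `volume_sample` and the exhaustive enumeration over $\binom{m}{k}$ subsets are assumptions of this sketch; the paper's contribution is precisely to avoid this exponential cost.

```python
import itertools
import random

import numpy as np


def volume_sample(A, k, rng=None):
    """Draw one k-subset S of the row indices of A with probability
    proportional to det(A_S A_S^T), the squared k-dimensional volume
    of the parallelepiped spanned by the rows indexed by S.

    Brute force over all C(m, k) subsets -- feasible only for tiny
    m and k; shown purely to illustrate the distribution.
    """
    rng = rng or random.Random()
    m = A.shape[0]
    subsets = list(itertools.combinations(range(m), k))
    # Squared volume of each subset = Gram determinant of its rows.
    # Clamp tiny negative values caused by floating-point error.
    weights = [max(np.linalg.det(A[list(S)] @ A[list(S)].T), 0.0)
               for S in subsets]
    return rng.choices(subsets, weights=weights, k=1)[0]
```

For example, a zero row spans no volume, so any subset containing it has Gram determinant 0 and is never sampled; subsets of mutually orthogonal unit rows are sampled uniformly, since each has squared volume 1.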
