Abstract

Matrix completion algorithms fill in the missing entries of a large matrix given a subset of observed samples. However, the problem of how to best pre-select informative matrix entries under a given sampling budget remains largely unaddressed. In this paper, we propose a fast sample selection strategy for matrix completion from a graph signal processing perspective. Specifically, we first regularize the matrix reconstruction objective using a dual graph signal smoothness prior, resulting in a system of linear equations to solve. We then select samples to maximize the smallest eigenvalue $\lambda_{\min}$ of the coefficient matrix, thus maximizing the stability of the linear system. To efficiently solve this combinatorial problem, we derive a greedy sampling strategy, leveraging the Gershgorin circle theorem, that iteratively selects one sample at a time (equivalent to shifting one Gershgorin disc) corresponding to the largest-magnitude entry in the first eigenvector of a modified graph Laplacian matrix. Our algorithm benefits computationally from warm starts, since the first eigenvectors of the incrementally updated Laplacian matrices are computed recursively as more samples are added. To achieve computational scalability when sampling large matrices, we further rewrite the coefficient matrix as a sum of two components, each of which exhibits a block-diagonal structure that we exploit for alternating block-wise sampling. Extensive experiments on both synthetic and real-world datasets show that our graph sampling algorithm substantially outperforms existing sampling schemes for matrix completion and reduces the completion error when combined with a range of modern matrix completion algorithms.
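To make the greedy idea concrete, the following is a minimal sketch of the selection loop in a simplified setting. It assumes a single graph Laplacian $L$ and a coefficient matrix of the form $B = \mathrm{diag}(a) + \mu L$, where $a$ is a 0/1 sampling indicator vector and $\mu$ is a regularization weight (both hypothetical simplifications; the paper's actual formulation uses a dual graph prior and a Gershgorin-based disc-shifting criterion rather than repeated full eigendecompositions). Each selected sample raises one diagonal entry of $B$, shifting the corresponding Gershgorin disc to the right; the candidate is chosen as the largest-magnitude entry of the first eigenvector of the current $B$:

```python
import numpy as np

def greedy_eigvec_sampling(L, budget, mu=0.01):
    """Illustrative greedy sampling sketch (not the authors' implementation).

    At each step, pick the unsampled index whose entry in the first
    eigenvector (eigenvector of the smallest eigenvalue) of
    B = diag(a) + mu * L has the largest magnitude, then mark it sampled.
    Raising that diagonal entry shifts one Gershgorin disc rightward,
    which tends to increase the smallest eigenvalue lambda_min of B.
    """
    n = L.shape[0]
    a = np.zeros(n)           # sampling indicator vector
    selected = []
    for _ in range(budget):
        B = np.diag(a) + mu * L
        eigvals, eigvecs = np.linalg.eigh(B)   # ascending eigenvalue order
        v1 = eigvecs[:, 0]                     # first eigenvector
        unsampled = [i for i in range(n) if a[i] == 0]
        cand = max(unsampled, key=lambda i: abs(v1[i]))
        a[cand] = 1.0
        selected.append(cand)
    return selected

# Toy example: Laplacian of a 6-node path graph.
n = 6
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1.0
picks = greedy_eigvec_sampling(L, budget=3)
print(picks)
```

A practical implementation would avoid the full `eigh` call per step, e.g., by warm-starting an iterative eigensolver (such as LOBPCG) with the previous first eigenvector, as the abstract's warm-start remark suggests.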
