Abstract

The task of estimating a matrix given a sample of observed entries is known as the \emph{matrix completion problem}. Most works on matrix completion have focused on recovering an unknown real-valued low-rank matrix from a random sample of its entries. Here, we investigate the case of highly quantized observations, where the measurements can take only a small number of values. These quantized outputs are generated according to a probability distribution parametrized by the unknown matrix of interest. This model corresponds, for example, to ratings in recommender systems or labels in multi-class classification. We consider a general, non-uniform sampling scheme and give theoretical guarantees on the performance of a constrained, nuclear norm penalized maximum likelihood estimator. One important advantage of this estimator is that it does not require knowledge of the rank or an upper bound on the nuclear norm of the unknown matrix and, thus, it is adaptive. We provide lower bounds showing that our estimator is minimax optimal. An efficient algorithm based on lifted coordinate gradient descent is proposed to compute the estimator. A limited Monte Carlo experiment, using both simulated and real data, is provided to support our claims.
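The quantized observation model described above can be made concrete with a small sketch. The snippet below is illustrative only: it assumes a logistic link between the unknown low-rank matrix and the binary outputs, and uses hypothetical dimensions and sampling probabilities (none of these specifics come from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground truth: a rank-2 matrix M = U V^T.
m1, m2, r = 30, 40, 2
U = rng.normal(size=(m1, r))
V = rng.normal(size=(m2, r))
M = U @ V.T

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Non-uniform sampling: each entry (i, j) is observed independently
# with its own probability pi[i, j] (drawn at random for illustration).
pi = rng.uniform(0.1, 0.5, size=(m1, m2))
observed = rng.random((m1, m2)) < pi

# One-bit (quantized) outputs: Y[i, j] = +1 with probability
# sigmoid(M[i, j]) and -1 otherwise; only the sampled entries are seen.
Y = np.where(rng.random((m1, m2)) < sigmoid(M), 1, -1)

obs_idx = np.argwhere(observed)   # indices of observed entries
obs_vals = Y[observed]            # their quantized values
```

A maximum likelihood estimator would then fit a candidate matrix to `(obs_idx, obs_vals)` under a nuclear norm penalty; the sketch stops at data generation.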

Highlights

  • The matrix completion problem arises in a wide range of applications such as image processing [14, 15, 27], quantum state tomography [12], seismic data reconstruction [28] or recommender systems [20, 2]

  • It consists in recovering all the entries of an unknown matrix, based on partial, random and, possibly, noisy observations of its entries

  • The matrix completion problem can be solved provided that the unknown matrix is low rank, either exactly or approximately; see [6, 16, 19, 24, 4, 18] and the references therein


Summary

Introduction

The matrix completion problem arises in a wide range of applications such as image processing [14, 15, 27], quantum state tomography [12], seismic data reconstruction [28] or recommender systems [20, 2]. In most prior work, the entries are assumed to be real valued and observed in the presence of additive, homoscedastic Gaussian or sub-Gaussian noise. In this framework, the matrix completion problem can be solved provided that the unknown matrix is low rank, either exactly or approximately; see [6, 16, 19, 24, 4, 18] and the references therein. One-bit matrix completion was further considered by [5], where a max-norm constrained maximum likelihood estimator is studied. This method allows more general non-uniform sampling schemes but still requires an upper bound on the max-norm of the unknown matrix. For any tensor X ∈ ℝ^(m1×m2×q) we define rk(X) := max_{l∈[q]} rk(X_l), where rk(X_l) is the rank of the matrix X_l, and its sup-norm ‖X‖_∞ := max_{l∈[q]} ‖X_l‖_∞.
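The slice-wise rank and sup-norm defined at the end of the paragraph above can be checked numerically. This is a minimal sketch (the tensor below is a hypothetical example, not data from the paper):

```python
import numpy as np

def tensor_rank(X):
    """rk(X) := max over slices l of rank(X[:, :, l]) for X of shape (m1, m2, q)."""
    return max(np.linalg.matrix_rank(X[:, :, l]) for l in range(X.shape[2]))

def tensor_sup_norm(X):
    """||X||_inf := max over slices of the entrywise sup-norm ||X_l||_inf."""
    return max(np.abs(X[:, :, l]).max() for l in range(X.shape[2]))

# Example: q = 3 slices, each built as a product of rank-2 factors,
# so every slice has rank at most 2.
rng = np.random.default_rng(1)
m1, m2, q, r = 10, 12, 3, 2
X = np.stack(
    [rng.normal(size=(m1, r)) @ rng.normal(size=(r, m2)) for _ in range(q)],
    axis=2,
)
```

Since the maximum runs over slices, `tensor_sup_norm(X)` coincides with the largest absolute entry of the whole tensor.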

One-bit matrix completion
Minimax lower bounds for one-bit matrix completion
Extension to multi-class problems
Implementation
Numerical experiments
Proof of Theorem 1 and Theorem 4
Proof of Theorem 5
Findings
Proof of Theorem 3
