Abstract

We extend the theory of low-rank matrix recovery and completion to the case in which Poisson observations of a linear combination or a subset of the entries of a matrix are available, a setting that arises in various applications with count data. We consider the standard matrix recovery formulation through maximum likelihood with proper constraints on the matrix $M$ of size $d_1$-by-$d_2$, and establish theoretical upper and lower bounds on the recovery error. Our bounds for matrix completion are nearly optimal up to a factor on the order of $\mathcal{O}(\log(d_1 d_2))$. These bounds are obtained by combining techniques for compressed sensing of sparse vectors with Poisson noise with techniques for analyzing low-rank matrices, and by adapting the arguments used for one-bit matrix completion \cite{davenport20121} (although these two problems are different in nature); the adaptation requires new techniques that exploit properties of the Poisson likelihood function and tackle the difficulties posed by the locally sub-Gaussian character of the Poisson distribution. Our results highlight a few important distinctions of the Poisson case compared to prior work, including the need to impose a minimum signal-to-noise requirement on each observed entry and a gap between the upper and lower bounds. We also develop a set of efficient iterative algorithms and demonstrate their good performance on synthetic examples and real data.
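
As a rough illustration of the constrained maximum-likelihood formulation described above, the following minimal Python sketch estimates a low-rank intensity matrix from Poisson counts observed on a subset of entries via projected gradient steps. The helper names, the entry bounds `alpha` and `beta`, the hard rank truncation, and the step-size schedule are illustrative assumptions, not the algorithm proposed in the paper.

```python
# A minimal, hypothetical sketch of constrained Poisson maximum-likelihood
# matrix completion via projected gradient descent.  All names and default
# parameters are illustrative assumptions.
import numpy as np

def poisson_nll(M, Y, Omega):
    """Negative Poisson log-likelihood over the observed entries Omega,
    assuming Y[i, j] ~ Poisson(M[i, j]) for (i, j) in Omega."""
    m = M[Omega]
    return float(np.sum(m - Y[Omega] * np.log(m)))

def project(M, alpha, beta, max_rank):
    """Heuristic projection: truncate to the leading singular values, then
    clip entries into [beta, alpha] (beta > 0 acts as a minimum signal
    level).  An exact projection onto the feasible set is more involved;
    this alternating step is only a sketch."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[max_rank:] = 0.0
    return np.clip(U @ np.diag(s) @ Vt, beta, alpha)

def complete(Y, Omega, alpha=10.0, beta=0.1, max_rank=5,
             step=1e-3, iters=500):
    """Recover a low-rank intensity matrix from Poisson counts Y observed
    on the boolean mask Omega."""
    M = np.full(Y.shape, Y[Omega].mean())        # flat initialization
    for _ in range(iters):
        grad = np.zeros_like(M, dtype=float)
        grad[Omega] = 1.0 - Y[Omega] / M[Omega]  # gradient of the Poisson NLL
        M = project(M - step * grad, alpha, beta, max_rank)
    return M
```

Given a count matrix `Y` and a boolean observation mask `Omega`, `complete(Y, Omega)` returns an estimate of the intensity matrix, and `poisson_nll` can be used to monitor the objective. The entry bounds mirror the constraints on $M$ discussed in the abstract: an upper bound on the intensities and a strictly positive lower bound reflecting the minimum signal-to-noise requirement on each observed entry.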
