Abstract

Estimating a low rank matrix from its linear measurements is a problem of central importance in contemporary statistical analysis. The choice of tuning parameters for such estimators remains an important challenge from both a theoretical and a practical perspective. To this end, Stein’s Unbiased Risk Estimate (SURE) provides a well-grounded statistical framework for degrees of freedom estimation. In this paper, we use the SURE framework to obtain degrees of freedom estimates for a general class of spectral regularized matrix estimators—our results generalize beyond the class of estimators that have been studied thus far. To this end, we use a result due to Shapiro (2002) pertaining to the differentiability of symmetric matrix valued functions, developed in the context of semidefinite optimization algorithms. We rigorously verify the applicability of Stein’s Lemma towards the derivation of degrees of freedom estimates; and also present new techniques based on Gaussian convolution to estimate the degrees of freedom of a class of spectral estimators for which Stein’s Lemma does not directly apply.

Highlights

  • Consider the basic sequence model setup y = μ + ε, with E(ε) = 0 and Cov(ε) = τ²I, where I is the identity matrix; we observe y ∈ R^n, a noisy version of the unknown signal μ ∈ R^n

  • In this paper, we aim to present a systematic study of two generic low rank matrix estimators, namely spectral regularized and rank constrained estimators—this includes, but is not limited to, all estimators studied in the three aforementioned works

  • Our contributions are summarized as: (i) We propose a framework to derive the analytic formula of ∑_{ij} ∂μ̂_{ij}/∂y_{ij} for general matrix estimators, by appealing to some nice results pertaining to differentiability of symmetric matrix valued functions due to Shapiro [28]—these results were derived in the context of semidefinite optimization algorithms
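The divergence ∑_{ij} ∂μ̂_{ij}/∂y_{ij} in the highlight above can always be approximated numerically, which is useful as a sanity check against any analytic formula. The sketch below (not the paper's method; `svt` and the step size `eps` are illustrative choices) computes the divergence of singular value soft-thresholding by central finite differences:

```python
import numpy as np

def svt(Y, lam):
    """Singular value soft-thresholding: shrink each singular value by lam."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt

def numerical_divergence(f, Y, eps=1e-6):
    """Approximate sum_ij d f(Y)_ij / d Y_ij by central differences."""
    div = 0.0
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            E = np.zeros_like(Y)
            E[i, j] = eps
            div += (f(Y + E)[i, j] - f(Y - E)[i, j]) / (2 * eps)
    return div

rng = np.random.default_rng(0)
Y = rng.standard_normal((6, 5))
print(numerical_divergence(lambda M: svt(M, 1.0), Y))
```

Since soft-thresholding of singular values is a proximal (nonexpansive) map, its divergence should fall between 0 and the number of matrix entries, which gives a quick correctness check on the output.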


Summary

Introduction

[29, 9] derive an alternate expression of df for the Gaussian sequence model y ∼ N(μ, τ²I) when μ̂ is weakly differentiable with respect to y. In this case, the degrees of freedom of μ̂ is given by the well-known Stein’s Lemma: df(μ̂) = E[∑_i ∂μ̂_i/∂y_i]. There has been nice recent work on using SURE theory to derive the df of low rank matrix estimators—but the problem becomes quite challenging, as one needs to deal with the differentiability properties of nonlinear functions of the spectrum and singular vectors of a matrix. All of our proofs and related technical material are relegated to the appendix.
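Stein’s Lemma can be checked by simulation in the scalar sequence model: for soft-thresholding, ∂μ̂_i/∂y_i is the indicator {|y_i| > λ}, so the df estimate is the expected number of surviving coordinates, which should match the covariance definition df = (1/τ²) ∑_i Cov(μ̂_i, y_i). A minimal Monte Carlo sketch (the signal `mu`, threshold `lam`, and sample size `B` are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau, lam = 50, 1.0, 0.8
mu = np.concatenate([np.full(10, 3.0), np.zeros(n - 10)])  # sparse signal

def soft(y, lam):
    """Scalar soft-thresholding applied entrywise."""
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

B = 20000                                   # Monte Carlo replications
Y = mu + tau * rng.standard_normal((B, n))  # draws from N(mu, tau^2 I)
M = soft(Y, lam)

# df via the covariance definition: (1/tau^2) * sum_i Cov(muhat_i, y_i)
df_cov = ((Y - mu) * M).sum(axis=1).mean() / tau**2

# df via Stein's Lemma: E[ sum_i d muhat_i / d y_i ] = E[ #{i : |y_i| > lam} ]
df_stein = (np.abs(Y) > lam).sum(axis=1).mean()

print(df_cov, df_stein)
```

With enough replications the two estimates agree to within Monte Carlo error, illustrating why the divergence form of df is the practically useful one: it needs only the observed data, not the unknown μ.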

Notations
Computing the divergence of matrix valued spectral functions
Estimators obtained via spectral regularization
Reduced rank estimators
Verifying the regularity conditions
Estimating df via smoothing with convolution operators
Degrees of freedom in multivariate linear regression
Reduced rank regression estimators
Spectral regularized regression estimators
Additive Gaussian model
Multivariate linear regression
Conclusion
Proof of Corollary 1
A useful lemma
Proof of Corollary 2
Proof of Corollary 3
Proof of Corollary 4
Proof of Corollary 5
Proof of Corollary 6
Stein’s unbiased risk estimate
