Abstract

Let $A:[0,1]\rightarrow\mathbb{H}_m$ (the space of Hermitian matrices) be a matrix valued function which is low rank with entries in the Hölder class $\Sigma(\beta,L)$. The goal of this paper is to study statistical estimation of $A$ based on the regression model $\mathbb{E}(Y_j|\tau_j,X_j) = \langle A(\tau_j), X_j \rangle,$ where the $\tau_j$ are i.i.d. uniformly distributed in $[0,1]$, the $X_j$ are i.i.d. matrix completion sampling matrices, and the $Y_j$ are independent bounded responses. We propose an innovative nuclear norm penalized local polynomial estimator and establish an upper bound on its pointwise risk measured by the Frobenius norm. Then we extend this estimator globally and prove an upper bound on its integrated risk measured by the $L_2$-norm. We also propose another new estimator based on bias-reducing kernels to study the case when $A$ is not necessarily low rank, and establish an upper bound on its risk measured by the $L_{\infty}$-norm. We show that the obtained rates are all optimal up to logarithmic factors in the minimax sense. Finally, we propose an adaptive estimation procedure based on Lepskii's method and model selection with data splitting, which is computationally efficient and can be easily implemented and parallelized.
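The sampling model above is easy to simulate. The following is a minimal sketch, not the paper's setup verbatim: it assumes a hypothetical rank-1 symmetric $A(t)$ and entrywise sampling matrices $X_j = e_i e_k^T$, for which $\langle A, X_j \rangle$ reduces to a single entry of $A$.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 5, 1000                   # matrix dimension and sample size
U = rng.normal(size=(m, 1))      # fixed direction for a rank-1 curve

def A(t):
    # Hypothetical example: a smooth scalar curve times a fixed rank-1
    # symmetric matrix, so A(t) is Hermitian and low rank for every t.
    return np.sin(2 * np.pi * t) * (U @ U.T)

tau = rng.uniform(size=n)        # design points, i.i.d. uniform on [0, 1]
rows = rng.integers(m, size=n)   # matrix completion sampling:
cols = rng.integers(m, size=n)   # X_j = e_i e_k^T observes one entry

# With X_j = e_i e_k^T, <A(tau_j), X_j> = A(tau_j)[k, i] = A(tau_j)[i, k]
# (equal here since A(t) is symmetric).
Y = np.array([A(t)[i, k] for t, i, k in zip(tau, rows, cols)])
Y = Y + 0.1 * rng.normal(size=n) # Gaussian noise for illustration only
                                 # (the paper assumes bounded responses)
```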

Highlights

  • Let A : [0, 1] → Hm be a matrix valued function

  • The goal of this paper is to study the problem of statistical estimation of a matrix valued function A based on the regression model E(Yj | τj, Xj) = ⟨A(τj), Xj⟩

  • We are interested in the case where A is low rank and its entries belong to a standard function class Σ(β, L) called the Hölder class (see Definition 1)
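For reference, the textbook Hölder class on $[0,1]$ reads as follows; this is the standard definition, which Definition 1 of the paper presumably matches up to constants.

```latex
% Standard Hölder class on [0,1]: write l = \lfloor \beta \rfloor for the
% largest integer strictly less than \beta when \beta is not an integer.
\Sigma(\beta, L) = \Bigl\{ f : [0,1] \to \mathbb{R} \;\Big|\;
  f \text{ is } l \text{ times differentiable and }
  \bigl| f^{(l)}(x) - f^{(l)}(y) \bigr| \le L\,|x - y|^{\beta - l}
  \ \ \forall\, x, y \in [0,1] \Bigr\}
```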


Summary

Introduction

Let A : [0, 1] → Hm (the space of Hermitian matrices) be a matrix valued function. The goal of this paper is to study the problem of statistical estimation of A based on the regression model above. We also study a naive kernel estimator that can be used to estimate matrix valued functions which are not necessarily low rank. We prove that an adaptive procedure, based on Lepskii's method and model selection with data splitting, selects an estimator A∗ whose integrated risk measured by the L2-norm attains an upper bound that is optimal up to logarithmic factors. To the best of our knowledge, such problems have not previously been thoroughly studied from a theoretical point of view.
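The naive kernel estimator mentioned above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: assuming entrywise sampling matrices, each entry curve A_{ik}(·) is smoothed separately by a Nadaraya–Watson average over the observations that hit that entry, with a triangular kernel and a hypothetical bandwidth h.

```python
import numpy as np

def kernel_estimate(t0, tau, rows, cols, Y, m, h=0.1):
    """Naive Nadaraya-Watson estimate of A(t0) from entrywise observations.

    tau: design points in [0, 1]; (rows[j], cols[j]) indexes the entry
    observed by X_j; Y: responses; m: matrix dimension; h: bandwidth.
    """
    w = np.maximum(1.0 - np.abs(tau - t0) / h, 0.0)  # triangular kernel
    A_hat = np.zeros((m, m))
    W = np.zeros((m, m))
    np.add.at(A_hat, (rows, cols), w * Y)            # weighted sum per entry
    np.add.at(W, (rows, cols), w)                    # total weight per entry
    mask = W > 0
    A_hat[mask] /= W[mask]                           # average where data exists
    return A_hat

# Demo on synthetic data: A(t) = I constant in t, noiseless responses.
rng = np.random.default_rng(1)
m, n = 3, 20000
tau = rng.uniform(size=n)
rows = rng.integers(m, size=n)
cols = rng.integers(m, size=n)
Y = (rows == cols).astype(float)   # <A, X_j> = 1 on the diagonal, else 0
A_hat = kernel_estimate(0.5, tau, rows, cols, Y, m)
```

Note that this entrywise scheme ignores the low rank structure entirely, which is why the paper's nuclear norm penalized local polynomial estimator achieves better rates in the low rank regime.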

Notations
Matrix completion and statistical learning setting
Matrix valued function
A local polynomial Lasso estimator
From localization to globalization
Bias reduction through higher order kernels
Lower bounds under matrix completion setting
Model selection
Numerical simulation
Simulation results of theoretical bounds
Proofs
Proof of Lemma 3

