Abstract

A prominent concern of scientific investigators is the presence of unobserved hidden variables in association analysis. Ignoring hidden variables often yields biased statistical results and misleading scientific conclusions. Motivated by this practical issue, this paper studies multivariate response regression with hidden variables, $Y = (\Psi^*)^\top X + (B^*)^\top Z + E$, where $Y \in \mathbb{R}^m$ is the response vector, $X \in \mathbb{R}^p$ is the observable feature vector, $Z \in \mathbb{R}^K$ represents the vector of unobserved hidden variables, possibly correlated with $X$, and $E$ is an independent error. The number of hidden variables $K$ is unknown, and both $m$ and $p$ are allowed, but not required, to grow with the sample size $n$. Though $\Psi^*$ is shown to be nonidentifiable due to the presence of hidden variables, we propose to identify the projection of $\Psi^*$ onto the orthogonal complement of the row space of $B^*$, denoted by $\Theta^*$. The quantity $(\Theta^*)^\top X$ measures the effect of $X$ on $Y$ that cannot be explained through the hidden variables, and thus $\Theta^*$ is treated as the parameter of interest. Motivated by the identifiability proof, we propose a novel estimation algorithm for $\Theta^*$, called HIVE, under homoscedastic errors. The first step of the algorithm estimates the best linear prediction of $Y$ given $X$, in which the unknown coefficient matrix decomposes additively into $\Psi^*$ and a dense matrix arising from the correlation between $X$ and $Z$. Under a sparsity assumption on $\Psi^*$, we propose to minimize a penalized least squares loss by regularizing $\Psi^*$ and the dense matrix via the group lasso and multivariate ridge, respectively. Nonasymptotic deviation bounds on the in-sample prediction error are established. Our second step estimates the row space of $B^*$ by leveraging the covariance structure of the residual vector from the first step. In the last step, we estimate $\Theta^*$ by projecting $Y$ onto the orthogonal complement of the estimated row space of $B^*$ to remove the effect of hidden variables. Nonasymptotic error bounds for our final estimator of $\Theta^*$, valid for any $m$, $p$, $K$ and $n$, are established. We further show that, under mild assumptions, the rate of our estimator matches the best possible rate with known $B^*$ and is adaptive to the unknown sparsity of $\Theta^*$ induced by the sparsity of $\Psi^*$. The model identifiability, estimation algorithm and statistical guarantees are further extended to the setting with heteroscedastic errors. Thorough numerical simulations and two real data examples are provided to support our theoretical results.
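
To make the three-step structure of HIVE concrete, below is a minimal numerical sketch in Python/NumPy under the homoscedastic-error setting. It is not the authors' implementation: the penalized first step (group lasso on $\Psi^*$ plus ridge on the dense component) is replaced by a plain ridge fit for brevity, the number of hidden variables is chosen by a crude eigenvalue-gap heuristic, and all names (`hive_sketch`, `ridge_lam`, `K_hat`) are illustrative assumptions rather than notation from the paper.

```python
# Minimal sketch of the three-step structure of HIVE (not the authors' code).
# Step 1 is simplified to a ridge fit; the paper uses group-lasso + ridge.
import numpy as np

def hive_sketch(X, Y, ridge_lam=1.0, K_hat=None):
    """X: n x p features, Y: n x m responses. Returns (Theta_hat, K_hat)."""
    n, p = X.shape
    m = Y.shape[1]

    # Step 1 (simplified): best linear prediction of Y given X.
    F_hat = np.linalg.solve(X.T @ X + ridge_lam * np.eye(p), X.T @ Y)   # p x m

    # Step 2: residual covariance; its leading eigenvectors estimate the
    # row space of B* (up to rotation) under homoscedastic errors.
    R = Y - X @ F_hat                      # n x m residuals
    S = R.T @ R / n                        # m x m residual covariance
    evals, evecs = np.linalg.eigh(S)
    evals, evecs = evals[::-1], evecs[:, ::-1]
    if K_hat is None:                      # crude eigen-gap rule (assumption)
        K_hat = int(np.argmax(evals[:-1] - evals[1:])) + 1
    V = evecs[:, :K_hat]                   # m x K_hat basis of est. row(B*)

    # Step 3: project Y onto the orthogonal complement of the estimated
    # row space of B*, then re-fit (again by ridge in this sketch).
    P_perp = np.eye(m) - V @ V.T
    Theta_hat = np.linalg.solve(X.T @ X + ridge_lam * np.eye(p), X.T @ (Y @ P_perp))
    return Theta_hat, K_hat

# Tiny synthetic check with hidden variables Z correlated with X.
rng = np.random.default_rng(0)
n, p, m, K = 500, 10, 8, 2
X = rng.normal(size=(n, p))
Z = (X @ rng.normal(size=(p, K))) * 0.5 + rng.normal(size=(n, K))  # Z depends on X
Psi = np.zeros((p, m)); Psi[:3] = rng.normal(size=(3, m))          # row-sparse Psi*
B = rng.normal(size=(K, m))
Y = X @ Psi + Z @ B + 0.1 * rng.normal(size=(n, m))
Theta_hat, K_hat = hive_sketch(X, Y)
```

Heuristically, the residual of the first-step fit retains the part of $(B^*)^\top Z$ not linearly explained by $X$ plus the independent error, so under homoscedastic errors its covariance is approximately a rank-$K$ term spanned by the rows of $B^*$ plus a multiple of the identity; this is why the leading eigenvectors in Step 2 can serve as a surrogate for the row space of $B^*$.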
