Abstract

We consider the high-dimensional linear regression model $Y=X\beta^{0}+\epsilon$ with Gaussian noise $\epsilon$ and Gaussian random design $X$. We assume that $\Sigma:=\mathbb{E}\,X^{T}X/n$ is non-singular and write its inverse as $\Theta:=\Sigma^{-1}$. The parameter of interest is the first component $\beta_{1}^{0}$ of $\beta^{0}$. We show that in the high-dimensional case the asymptotic variance of a debiased Lasso estimator can be smaller than $\Theta_{1,1}$. For some special cases of this kind we establish asymptotic efficiency. The conditions include $\beta^{0}$ being sparse and the first column $\Theta_{1}$ of $\Theta$ not being sparse. These sparsity conditions depend on whether $\Sigma$ is known or not.
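For orientation, here is the standard low-dimensional benchmark from classical least-squares theory (not taken from the paper): with fixed $p<n$ and $\epsilon\sim N(0,\sigma^{2}I)$, the ordinary least squares estimator $\hat\beta=(X^{T}X)^{-1}X^{T}Y$ satisfies $\mathrm{Var}(\hat\beta\mid X)=\sigma^{2}(X^{T}X)^{-1}$, so that $n\,\mathrm{Var}(\hat\beta_{1}\mid X)=\sigma^{2}(\hat\Sigma^{-1})_{1,1}\approx\sigma^{2}\,\Theta_{1,1}$ with $\hat\Sigma:=X^{T}X/n$. With $\sigma^{2}$ normalized to one, this is the "usual" value $\Theta_{1,1}$ that, per the abstract, a debiased Lasso can improve upon in the high-dimensional regime.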

Highlights

  • Let $Y$ be an $n$-vector of observations and $X \in \mathbb{R}^{n\times p}$ an input matrix

  • We show that in the high-dimensional case the asymptotic variance of a debiased Lasso estimator can be smaller than $\Theta_{1,1}$

  • Using the node-wise Lasso, we find that one may again profit from non-sparsity of the vector $\Theta_{1}$ (see the sketch after this list)
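
The node-wise Lasso referenced in the last highlight estimates the column $\Theta_{1}$ by Lasso-regressing the first design column on the remaining ones. Below is a minimal Python sketch of the standard construction from the debiased-Lasso literature (van de Geer et al. style); it is not the paper's own implementation, and the regularization level `lam` and the use of scikit-learn's `Lasso` are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def nodewise_lasso_theta1(X, lam):
    """Estimate the first column of Theta = Sigma^{-1} via the node-wise Lasso.

    Standard construction: regress X_1 on the remaining columns,
        gamma_hat = argmin ||X_1 - X_{-1} gamma||_2^2 / (2n) + lam * ||gamma||_1,
    then set tau2 = ||X_1 - X_{-1} gamma_hat||_2^2 / n + lam * ||gamma_hat||_1
    and Theta_1_hat = (1, -gamma_hat) / tau2.
    """
    n, p = X.shape
    x1, X_rest = X[:, 0], X[:, 1:]
    fit = Lasso(alpha=lam, fit_intercept=False).fit(X_rest, x1)
    gamma_hat = fit.coef_
    resid = x1 - X_rest @ gamma_hat
    tau2 = resid @ resid / n + lam * np.abs(gamma_hat).sum()
    return np.concatenate(([1.0], -gamma_hat)) / tau2
```

The scaling by $\hat\tau^{2}$ follows the usual node-wise normalization; exact constants in the penalty differ across papers, so treat the objective above as one common variant.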


Summary

Introduction

Let $Y$ be an $n$-vector of observations and $X \in \mathbb{R}^{n\times p}$ an input matrix. The linear model is $Y = X\beta^{0} + \epsilon$. The paper shows that the asymptotic variance of the debiased estimator can be smaller than the "usual" value for the low-dimensional case. The paper [11] does not require sparsity of $\Theta_{1}$ when $\Sigma$ is known, and it turns out that for certain non-sparse vectors $\Theta_{1}$ their estimator is not asymptotically efficient, for example under the model (1.2) with $s = o(n/\log p)$ and with a matrix $\Sigma$ of a certain form (see Theorem 2.1 or Remark 2.6 following this theorem). Which sparsity variant is needed to establish asymptotic normality of the debiased Lasso (1.5) depends to a large extent on whether $\Sigma$ is known or not. One can show asymptotic linearity of the debiased Lasso under model (1.1) with sparsity variant (ii) and in addition $\|\Theta_{1}\|_{0} = o(\sqrt{n}/\log p)$. For the case where $\Sigma$ is unknown, model (1.2) is too large.
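
For concreteness, here is a minimal Python sketch of a debiased Lasso for $\beta_{1}^{0}$ in the known-$\Sigma$ case, where $\Theta_{1}$ is available exactly. The one-step correction is the standard form from the debiased-Lasso literature, which the paper's estimator (1.5) presumably resembles; the simulated data and the regularization choice are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import Lasso

def debiased_lasso_beta1(X, Y, theta1, lam):
    """One-step debiased Lasso for the first coefficient, Sigma known.

    Standard construction: start from the Lasso estimate beta_hat and
    correct its bias with the score X^T (Y - X beta_hat) / n projected
    onto the (known) first column theta1 of Theta = Sigma^{-1}:
        b1_hat = beta_hat_1 + theta1^T X^T (Y - X beta_hat) / n.
    """
    n, p = X.shape
    beta_hat = Lasso(alpha=lam, fit_intercept=False).fit(X, Y).coef_
    resid = Y - X @ beta_hat
    return beta_hat[0] + theta1 @ (X.T @ resid) / n

# Illustrative usage on simulated data (all choices are assumptions):
rng = np.random.default_rng(0)
n, p, s = 200, 500, 3
beta0 = np.zeros(p); beta0[:s] = 1.0          # sparse truth
X = rng.standard_normal((n, p))               # Sigma = I, so Theta_1 = e_1
Y = X @ beta0 + rng.standard_normal(n)
theta1 = np.zeros(p); theta1[0] = 1.0
lam = np.sqrt(np.log(p) / n)                  # common rate-motivated choice
print(debiased_lasso_beta1(X, Y, theta1, lam))
```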

The asymptotic Cramér-Rao lower bound
Notations and definitions
Organization of the rest of the paper
The case of Σ known
Finding eligible pairs
Using the Lasso
Using projections
Approximate projections
Reverse engineering
Regression: γ0 as least squares estimate of γ
Creating γ0 directly
Creating γ0 using a non-sparsity restriction
The case of Σ unknown
Conclusion
Proof for Section 1
Proofs for Section 2
Proofs for Section 3
Proofs for Section 4
Probability inequalities