Abstract
The class of $\ell_q$-regularized least squares (LQLS) estimators is considered for estimating $\beta \in \mathbb{R}^p$ from its $n$ noisy linear observations $y = X\beta + w$. The performance of these schemes is studied under the high-dimensional asymptotic setting in which the dimension of the signal grows linearly with the number of measurements. In this asymptotic setting, phase transition (PT) diagrams are often used for comparing the performance of different estimators. PT analysis specifies the minimum number of observations required by a given estimator to recover a structured signal, e.g., a sparse one, from its noiseless linear observations. Although PT analysis has been shown to provide useful information for compressed sensing, the fact that it ignores the measurement noise not only limits its applicability in many application areas but may also lead to misunderstandings. For instance, consider a linear regression problem in which $n > p$ and the signal is not exactly sparse. If the measurement noise is ignored in such systems, regularization techniques such as LQLS seem irrelevant, since even ordinary least squares (OLS) returns the exact solution. However, it is well known that if $n$ is not much larger than $p$, then regularization techniques improve the performance of OLS. In response to this limitation of PT analysis, we consider a low-noise sensitivity analysis. We show that this analysis framework (i) reveals the advantage of LQLS over OLS, (ii) captures the difference between different LQLS estimators even when $n > p$, and (iii) provides a fair comparison among different estimators at high signal-to-noise ratios. As an application of this framework, we show that, under mild conditions, LASSO outperforms the other LQLS estimators even when the signal is dense. Finally, by a simple transformation, we connect our low-noise sensitivity framework to the classical asymptotic regime in which $n/p \rightarrow \infty$, and characterize how and when regularization techniques offer improvements over ordinary least squares, and which regularizer offers the greatest improvement when the sample size is large.
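For concreteness, a minimal sketch of the LQLS (bridge) objective assumed throughout the discussion above is given below; the abstract itself does not fix the regularization parameter $\lambda$ or the exact range of $q$, so both should be read as part of this assumed formulation, with $q = 1$ corresponding to LASSO and $q = 2$ to ridge regression.

\[
  \hat{\beta}(\lambda, q) \;=\; \operatorname*{arg\,min}_{\beta \in \mathbb{R}^p}
  \;\frac{1}{2}\,\|y - X\beta\|_2^2 \;+\; \lambda \|\beta\|_q^q,
  \qquad \|\beta\|_q^q = \sum_{i=1}^{p} |\beta_i|^q, \quad \lambda \ge 0.
\]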