Abstract
This paper compares the performance characteristics of penalty estimators, namely the LASSO and ridge regression (RR), with the least squares estimator (LSE), the restricted estimator (RE), the preliminary test estimator (PTE), and the Stein-type estimators. Under the assumption of an orthonormal design matrix for the given regression model, we find that the RR estimator uniformly dominates the LSE, RE, PTE, Stein-type estimators, and the LASSO estimator, while, similar to Hansen (2013), neither the LASSO nor the LSE, PTE, and Stein-type estimators dominate one another. Our conclusions are based on the analysis of L_2-risks and relative risk efficiencies (RRE), together with the related RRE tables and graphs.
Highlights
It is well known that the “least squares estimator (LSE)” in linear models is unbiased with minimum-variance characteristics
This paper is devoted to a comparative study of the finite-sample performance of the primary penalty estimators, namely the least absolute shrinkage and selection operator (LASSO) and the ridge regression estimator, relative to the LSE, restricted estimator (RE), preliminary test estimator (PTE), James-Stein estimator (JSE), and positive-rule estimator (PRE)
We consider the LASSO (least absolute shrinkage and selection operator) estimator due to [32], which has become widely used in the statistical literature because, unlike the other estimators, it is directly applicable to data analysis in linear models
Summary
It is well known that the “least squares estimator (LSE)” in linear models is unbiased with minimum-variance characteristics. There are many shrinkage estimators, such as the “preliminary test (PT)” and Stein-type estimators (SE), in the literature. These estimators do not select coefficients but only shrink them toward a target value. This paper is devoted to a comparative study of the finite-sample performance of the primary penalty estimators, namely the LASSO and the ridge regression estimator, relative to the LSE, RE, preliminary test estimator (PTE), James-Stein estimator (JSE), and positive-rule estimator (PRE). An important characteristic of the LASSO is that it provides simultaneous estimation and selection of coefficients in a linear model and can be applied when the dimension of the parameter space exceeds the dimension of the sample space, whereas Stein-type estimation requires the dimension of the parameter space to be below the dimension of the sample space.
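The distinction between selection and pure shrinkage can be made concrete under the orthonormal-design setting the paper assumes: there, the LASSO reduces to soft-thresholding of the LSE coefficients, while ridge regression rescales them by a common factor. The following sketch (not code from the paper; the function names and the tuning values `lam` and `k` are illustrative) shows that LASSO sets small coefficients exactly to zero, whereas ridge shrinks all of them but selects none.

```python
import numpy as np

def lasso_orthonormal(beta_lse, lam):
    # Soft-thresholding: coefficients with |beta| <= lam become exactly 0,
    # so estimation and variable selection happen simultaneously.
    return np.sign(beta_lse) * np.maximum(np.abs(beta_lse) - lam, 0.0)

def ridge_orthonormal(beta_lse, k):
    # Ridge shrinks every coefficient by the same factor 1/(1+k);
    # no coefficient is ever set exactly to zero.
    return beta_lse / (1.0 + k)

beta_lse = np.array([3.0, -0.5, 0.2, -2.0])  # hypothetical LSE coefficients
print(lasso_orthonormal(beta_lse, lam=1.0))  # [ 2.  -0.  0. -1.]
print(ridge_orthonormal(beta_lse, k=1.0))    # [ 1.5 -0.25  0.1 -1. ]
```

Note how the two middle coefficients survive ridge shrinkage (merely halved) but are eliminated by the LASSO, which is the selection property the summary refers to.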
Journal: Statistics, Optimization &amp; Information Computing