Abstract

We explore the extension of James-Stein type estimators in a direction that enables them to preserve their superiority when the sample size goes to infinity. Instead of shrinking a base estimator towards a fixed point, we shrink it towards a data-dependent point. We provide an analytic expression for the asymptotic risk and bias of James-Stein type estimators shrunk towards a data-dependent point and prove that they have smaller asymptotic risk than the base estimator. Shrinking an estimator toward a data-dependent point turns out to be equivalent to combining two random variables using the James-Stein rule. We propose a general combination scheme which includes random combination (the James-Stein combination) and the usual nonrandom combination as special cases. As an example, we apply our method to combine the Least Absolute Deviations estimator and the Least Squares estimator. Our simulation study indicates that the resulting combination estimators have desirable finite sample properties when errors are drawn from symmetric distributions. Finally, using stock return data we present some empirical evidence that the combination estimators have the potential to improve out-of-sample prediction in terms of both mean square error and mean absolute error.

Key Words: Shrinkage; Asymptotic risk; Combination estimator
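To make the idea of shrinking toward a data-dependent point concrete, here is a minimal Python sketch of a James-Stein-type combination of two estimators, using the abstract's LAD/LS example in its simplest form (a multivariate location parameter, where the LS estimator is the coordinate-wise mean and the LAD estimator is the coordinate-wise median). The shrinkage constant `c` and the positive-part rule are illustrative assumptions; the paper's exact constant and risk expressions are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def james_stein_combine(base, target, c):
    """Shrink `base` toward the data-dependent point `target` using a
    James-Stein-type, data-dependent weight. The constant `c` stands in
    for the paper's shrinkage constant (form not stated in the abstract)."""
    diff = base - target
    norm_sq = float(diff @ diff)
    if norm_sq <= c:                      # positive-part rule: do not over-shrink
        return target.copy()
    return target + (1.0 - c / norm_sq) * diff

# Toy illustration: p-dimensional location estimation with symmetric,
# heavy-tailed errors (a simplified stand-in for the regression setting).
p, n = 10, 50
theta = np.ones(p)
X = theta + rng.standard_t(df=3, size=(n, p))

ls_est = X.mean(axis=0)          # least-squares (mean) estimator
lad_est = np.median(X, axis=0)   # least-absolute-deviations (median) estimator

combined = james_stein_combine(lad_est, ls_est, c=(p - 2) / n)  # assumed constant

for name, est in [("LS", ls_est), ("LAD", lad_est), ("JS combo", combined)]:
    print(f"{name:8s} squared error: {np.sum((est - theta) ** 2):.4f}")
```

The data-dependent weight `1 - c/||base - target||**2` is what makes this a random (James-Stein) combination; fixing that weight at a constant would recover the usual nonrandom combination mentioned in the abstract.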
