Abstract

In this paper, we define a class of cross-validatory model selection criteria, each of which is an estimator of the predictive risk function based on a discrepancy between a candidate model and the true model. For a vector of unknown parameters, $n$ estimators are required to define the class, where $n$ is the sample size. The $i$th estimator $(i=1,\dots,n)$ is obtained by minimizing a weighted discrepancy function in which the $i$th observation has a weight of $1-\lambda$ and the others have a weight of $1$. Cross-validatory model selection criteria in the class are specified by individual values of $\lambda$. The sample discrepancy function and the ordinary cross-validation (CV) criterion are special cases of the class. One may choose $\lambda$ to minimize the bias. The optimal $\lambda$ makes the bias-corrected CV (CCV) criterion a second-order unbiased estimator of the risk function, whereas the ordinary CV criterion is only a first-order unbiased estimator.
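
For concreteness, the construction described above can be sketched as follows; the per-observation discrepancy $d(y_i;\boldsymbol{\theta})$, the estimator notation $\hat{\boldsymbol{\theta}}_{i,\lambda}$, and the displayed form of the criterion are illustrative assumptions based on this verbal description, not the paper's own definitions.
$$
\hat{\boldsymbol{\theta}}_{i,\lambda}
  = \operatorname*{arg\,min}_{\boldsymbol{\theta}}
    \Bigl\{ (1-\lambda)\, d(y_i;\boldsymbol{\theta})
      + \sum_{j \ne i} d(y_j;\boldsymbol{\theta}) \Bigr\},
  \qquad i = 1,\dots,n,
$$
$$
\mathrm{CV}_{\lambda}
  = \frac{1}{n} \sum_{i=1}^{n} d\bigl(y_i; \hat{\boldsymbol{\theta}}_{i,\lambda}\bigr).
$$
Under this reading, $\lambda = 0$ makes every $\hat{\boldsymbol{\theta}}_{i,0}$ equal to the full-sample minimizer, so $\mathrm{CV}_{0}$ reduces to the sample discrepancy function, while $\lambda = 1$ drops the $i$th observation from the $i$th fit, so $\mathrm{CV}_{1}$ reduces to the ordinary leave-one-out CV criterion.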
