Abstract

In the psychometric analysis of a new psychological test, we often assess the predictive validity of the target test over and above a baseline test, known as incremental predictive validity. Usually, incremental predictive validity is evaluated using within-sample statistics. Recently, it has been argued that an out-of-sample assessment should be used to prevent overfitting and non-replicable findings. In this paper, we elaborate on how to assess incremental predictive validity out-of-sample. In such an approach, we estimate prediction rules in one sample and evaluate incremental predictive validity in another sample. Using a simulation study, we investigate whether an out-of-sample assessment yields different findings than a within-sample evaluation, taking into account the reliability of the baseline and target tests as well as other factors (i.e., sample size). Results show that there is a difference between the in-sample and out-of-sample assessments, especially in small samples; the reliability of the two tests, however, has no influence on this difference. In addition, we explore the effects of three different methods for estimating a prediction rule (ordinary least squares, ridge regression, and SIMEX) on incremental predictive validity. The results show that SIMEX leads to a poor assessment of incremental predictive validity, whereas ordinary least squares and ridge regression yield nearly identical incremental predictive validity estimates, with a slight advantage for ridge regression. In an empirical application, we show how to assess incremental predictive validity in practice and compare this approach to the usual within-sample assessment.
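The following is a minimal sketch of the out-of-sample approach described above, not the authors' implementation: two error-prone test scores (a baseline and a target) predict a criterion, prediction rules are estimated with ordinary least squares and ridge regression in one half of the sample, and the incremental predictive validity (delta R-squared of the target test) is evaluated both within-sample and in the holdout half. All variable names, data-generating values, and the scikit-learn-based workflow are illustrative assumptions.

```python
# Illustrative sketch: within-sample vs. out-of-sample incremental
# predictive validity (Delta R^2) under assumed simulated data.
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300  # assumed total sample size

# True scores for the baseline (T1) and target (T2) tests, plus a criterion Y.
T1 = rng.normal(size=n)
T2 = 0.5 * T1 + rng.normal(scale=np.sqrt(0.75), size=n)
Y = 0.4 * T1 + 0.3 * T2 + rng.normal(scale=0.8, size=n)

# Observed scores: true scores contaminated by measurement error, so the
# tests have reliability below 1 (error variances are assumptions).
X1 = T1 + rng.normal(scale=0.5, size=n)  # baseline test
X2 = T2 + rng.normal(scale=0.5, size=n)  # target test

X = np.column_stack([X1, X2])
X_tr, X_te, y_tr, y_te = train_test_split(X, Y, test_size=0.5, random_state=0)

def delta_r2(model_cls, **kwargs):
    """Delta R^2 of the target test, within-sample and out-of-sample."""
    base = model_cls(**kwargs).fit(X_tr[:, :1], y_tr)  # baseline only
    full = model_cls(**kwargs).fit(X_tr, y_tr)         # baseline + target
    within = full.score(X_tr, y_tr) - base.score(X_tr[:, :1], y_tr)
    out = (r2_score(y_te, full.predict(X_te))
           - r2_score(y_te, base.predict(X_te[:, :1])))
    return within, out

print("OLS   (within, out-of-sample):", delta_r2(LinearRegression))
print("Ridge (within, out-of-sample):", delta_r2(Ridge, alpha=1.0))
```

The fifty-fifty split mirrors the two-sample design sketched in the abstract: prediction rules are estimated in one sample and incremental predictive validity is assessed in the other. With small n, the within-sample Delta R^2 will tend to exceed the out-of-sample value, consistent with the overfitting concern motivating the paper.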
