Abstract
Background: Admissions interviews are unreliable and have poor predictive validity, yet they are the sole measure of non-cognitive skills used by most medical school admissions departments. Their low reliability may be due in part to variation in conditional reliability across the rating scale.

Purpose: To describe an empirically derived estimate of conditional reliability and to use it to improve the predictive validity of interview ratings.

Methods: A set of medical school interview ratings was compared with a Monte Carlo-simulated set to estimate conditional reliability while controlling for range restriction, response-scale bias and other artefacts. This estimate was then applied as a weighting function to a second set of interview ratings to predict a non-cognitive outcome (USMLE Step II scores residualised on Step I scores).

Results: Compared with the simulated set, both observed sets showed greater reliability at low and high rating levels than at moderate levels. Raw interview scores did not predict USMLE Step II scores after controlling for Step I performance (additional r² = 0.001, not significant). Weighting interview ratings by estimated conditional reliability improved predictive validity (additional r² = 0.121, P < 0.01).

Conclusions: Conditional reliability is important for understanding the psychometric properties of subjective rating scales. Weighting interview ratings by their conditional reliability during the admissions process would improve admissions decisions.
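The abstract does not spell out the simulation or weighting procedure, so the sketch below is only a minimal illustration of the general idea, under several assumptions not stated in the source: a 7-point rating scale, two raters per applicant, adjacent agreement as the reliability index, a Monte Carlo reference set with no shared signal between raters, and a simple lookup-table weighting. All data are synthetic and all names (rate, conditional_agreement, incremental_r2) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# ----- Synthetic stand-in data (not the study's data) -----
n = 500
latent = rng.normal(size=n)                       # latent non-cognitive skill

def rate(skill, noise_sd=1.0):
    """Map latent skill to a discrete 1-7 rating with rater noise (assumed scale)."""
    raw = 4 + 1.5 * skill + rng.normal(scale=noise_sd, size=skill.shape)
    return np.clip(np.round(raw), 1, 7).astype(int)

rater_a, rater_b = rate(latent), rate(latent)     # two raters, same applicants

# ----- Monte Carlo reference set: same marginals, no shared signal -----
sim_a, sim_b = rate(rng.normal(size=n)), rate(rng.normal(size=n))

# ----- Conditional reliability: agreement at each point on the scale -----
def conditional_agreement(a, b, points=range(1, 8)):
    """P(|a - b| <= 1) among applicants whom rater A placed at each scale point."""
    out = []
    for p in points:
        mask = a == p
        out.append(np.mean(np.abs(a[mask] - b[mask]) <= 1) if mask.any() else np.nan)
    return np.array(out)

obs_rel = conditional_agreement(rater_a, rater_b)
sim_rel = conditional_agreement(sim_a, sim_b)
cond_rel = np.nan_to_num(np.clip(obs_rel - sim_rel, 0, None))  # agreement beyond chance

# ----- Weight each applicant's interview score by its conditional reliability -----
mean_rating = (rater_a + rater_b) / 2
weights = cond_rel[rater_a - 1]                   # look up reliability at A's rating
weighted = weights * mean_rating

# ----- Incremental validity: residualise Step II on Step I, then add the interview -----
step1 = 0.7 * latent + rng.normal(scale=0.7, size=n)
step2 = 0.6 * step1 + 0.3 * latent + rng.normal(scale=0.5, size=n)

def incremental_r2(x1, y, predictor):
    """r^2 gained by adding `predictor` after regressing y (Step II) on x1 (Step I)."""
    slope_intercept = np.polyfit(x1, y, 1)
    resid = y - np.polyval(slope_intercept, x1)   # Step II residuals from Step I
    r = np.corrcoef(resid, predictor)[0, 1]
    return r ** 2

print("raw rating:     ", incremental_r2(step1, step2, mean_rating))
print("weighted rating:", incremental_r2(step1, step2, weighted))
```

In this toy setup, ratings given at the more reliable ends of the scale contribute more to the weighted predictor, which is the mechanism the abstract credits for the gain in incremental r² over the raw interview scores.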