Abstract

Despite the amount of published research on the predictive validity of the Medical College Admission Test (MCAT) taken as a whole, few published reports separate the individual predictive validity of the Writing Sample. The purpose of this study is to provide data on the predictive validity of the Writing Sample for the national licensing examination used in the United States. Subjects consisted of 1992-1995 matriculants from a publicly owned medical school in the Southeastern United States. Independent variables were undergraduate grade point average and the four section scores on a required admissions test (Biological Sciences, Physical Sciences, Verbal Reasoning, and Writing Sample). The dependent variables were scores on Steps 1 and 2 of the three-step licensing examination; both steps are taken during medical school. Multiple regression models were used to calculate the additional variance accounted for when the Writing Sample was added to a model containing grade point average and the other admissions test section scores. In multivariate analyses, when grade point average and all admissions test scores were considered as predictors of licensing exam scores, the Writing Sample did not improve the prediction of Step 1 or Step 2 scores. The results of this study suggest that the Writing Sample has limited predictive validity for assessing success on a national licensing exam. However, as others suggest, the value of the Writing Sample and other surrogates of communication probably lies in predicting performance in the clinical years of medical school and beyond. Additional work should include evaluating the predictive validity of the Writing Sample and other pre-medical school measures of communication against widely accepted measures of performance in clinical settings, including physician-patient communication.
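The incremental-validity analysis described above can be illustrated with a brief hierarchical regression sketch. The code below is only an assumption-laden illustration, not the authors' analysis: the column names (gpa, bs, ps, vr, ws, step1, step2) and the data file are hypothetical, and the Writing Sample is assumed to have already been converted to a numeric scale.

```python
# Minimal sketch of an incremental (hierarchical) regression:
# compare a baseline model (GPA + three MCAT section scores) with a full
# model that also includes the Writing Sample, and report the change in R^2.
import pandas as pd
import statsmodels.api as sm

def incremental_r2(df: pd.DataFrame, outcome: str) -> None:
    base_vars = ["gpa", "bs", "ps", "vr"]      # GPA + Biological, Physical, Verbal scores
    full_vars = base_vars + ["ws"]             # add the Writing Sample

    y = df[outcome]
    base = sm.OLS(y, sm.add_constant(df[base_vars])).fit()
    full = sm.OLS(y, sm.add_constant(df[full_vars])).fit()

    delta_r2 = full.rsquared - base.rsquared   # variance added by the Writing Sample
    f_stat, p_value, _ = full.compare_f_test(base)  # F test for the nested comparison

    print(f"{outcome}: base R^2={base.rsquared:.3f}, full R^2={full.rsquared:.3f}, "
          f"delta R^2={delta_r2:.3f}, F={f_stat:.2f}, p={p_value:.3f}")

# Hypothetical usage, one row per matriculant:
# scores = pd.read_csv("matriculants.csv")
# incremental_r2(scores, "step1")
# incremental_r2(scores, "step2")
```

A nonsignificant F test and a near-zero delta R^2 would correspond to the finding reported here, namely that the Writing Sample adds little predictive value beyond the other admissions measures.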
