Abstract

Inter- and intra-observer variability is a concern for medical school admissions. Artificial intelligence (AI) may present an opportunity to apply a fair standard to all applicants systematically while maintaining sensitivity to the nuances that have been a part of traditional screening methods. Data from 5 years of medical school applications were retrospectively accrued and analyzed. The applicants (m = 22 258) were split 60%-20%-20% into a training set (m = 13 354), validation set (m = 4452), and test set (m = 4452). An AI model was trained and evaluated with the ground truth defined as whether a given applicant was invited for an interview. In addition, a "real-world" evaluation was conducted simultaneously within an admissions cycle to observe how the model would perform if utilized. The algorithm had an accuracy of 95% on the training set, 88% on the validation set, and 88% on the test set. The area under the curve on the test set was 0.93. The SHapley Additive exPlanations (SHAP) values demonstrated that the model utilizes features in a manner concordant with current admissions rubrics. By using a combined human and AI evaluation process, the accuracy of the process was demonstrated to be 96% on the "real-world" evaluation, with a negative predictive value of 0.97. These results demonstrate the feasibility of an AI approach applied to medical school admissions screening decision-making. Model explainability and supplemental analyses help ensure that the model makes decisions as intended.
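The 60%-20%-20% split that yields the reported set sizes can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' code; the shuffling, seed, and rounding choices are assumptions made so the counts match those stated in the abstract.

```python
import random

def split_indices(n_total, seed=0):
    """Shuffle applicant indices and split them 60%/20%/20%
    into training, validation, and test sets (illustrative only)."""
    idx = list(range(n_total))
    random.Random(seed).shuffle(idx)
    n_train = int(n_total * 0.60)      # 13 354 of 22 258
    n_val = round(n_total * 0.20)      # 4 452
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]       # remaining 4 452
    return train, val, test

train, val, test = split_indices(22258)
print(len(train), len(val), len(test))  # 13354 4452 4452
```

Holding out the validation set for model selection and the test set for a single final evaluation, as the abstract's separate validation and test accuracies imply, guards against overfitting the reported performance.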
