Abstract

There is increasing interest in understanding potential bias in medical education. We used natural language processing (NLP) to evaluate potential bias in clinical clerkship evaluations. We analyzed data from medical evaluations and administrative databases for medical students enrolled in third-year clinical clerkship rotations across two academic years. We collected demographic information on students and faculty evaluators to determine gender/racial concordance (i.e., whether the student and faculty identified with the same demographic). We fit a multinomial log-linear model for final clerkship grades, with predictors including numerical evaluation scores, gender/racial concordance, and sentiment scores of narrative evaluations computed with the SentimentIntensityAnalyzer tool in Python. A total of 2037 evaluations from 198 students were analyzed; statistical significance was defined as P < 0.05. Sentiment scores for evaluations did not vary significantly by student gender, race, or ethnicity (P = 0.88, 0.64, and 0.06, respectively). Word choices were similar across faculty and student demographic groups. Modeling showed that narrative evaluation sentiment scores were not predictive of an honors grade (odds ratio [OR] 1.23, P = 0.58), whereas numerical evaluation average (OR 1.45, P < 0.001) and gender concordance between faculty and student (OR 1.32, P = 0.049) were significant predictors of receiving honors. The lack of disparities in narrative text in our study contrasts with prior findings from other institutions. Ongoing efforts include comparative analyses with other institutions to understand what institutional factors may contribute to bias. NLP enables a systematic approach to investigating bias. The lack of association between word choices, sentiment scores, and final grades highlights potential opportunities to improve feedback processes for students.
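As a rough illustration of the pipeline the abstract describes, the sketch below scores narrative evaluations with NLTK's SentimentIntensityAnalyzer (the VADER analyzer) and assembles the predictors named above into a multinomial model. The column names, example records, and model setup are assumptions made for illustration; the abstract does not specify the study's data schema or exact model specification.

# A minimal sketch (not the study's actual code) of the analysis the abstract
# describes: score narrative evaluations with NLTK's SentimentIntensityAnalyzer
# (VADER) and set up a multinomial model of final clerkship grade.
# Column names and example records below are hypothetical.
import nltk
import pandas as pd
import statsmodels.api as sm
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon required by VADER

# Hypothetical evaluation records; real data would come from the
# institutional evaluation and administrative databases.
df = pd.DataFrame({
    "narrative": [
        "Outstanding fund of knowledge and excellent rapport with patients.",
        "Reliable team member; presentations were adequate but brief.",
    ],
    "numeric_avg": [4.8, 3.9],          # average numerical evaluation score
    "gender_concordant": [1, 0],        # 1 = student and evaluator share gender
    "grade": ["Honors", "High Pass"],   # final clerkship grade (outcome)
})

# Compound sentiment score for each narrative evaluation (-1 to 1).
sia = SentimentIntensityAnalyzer()
df["sentiment"] = df["narrative"].apply(lambda t: sia.polarity_scores(t)["compound"])

# Multinomial logit for final grade, using the predictors listed in the abstract.
X = sm.add_constant(df[["numeric_avg", "gender_concordant", "sentiment"]])
y = df["grade"].astype("category").cat.codes
model = sm.MNLogit(y, X)
# With a full dataset: result = model.fit(); print(result.summary())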
