Abstract

Letters of reference (LORs) play an important role in postgraduate residency applications. Human-written LORs have been shown to carry implicit gender bias, such as the use of more agentic versus communal words for men and more frequent doubt-raisers and references to appearance and personal life for women, which can result in inequitable access to residency opportunities for women. Given the gendered language often unconsciously inserted into human-written LORs, we sought to determine whether LORs generated by artificial intelligence exhibit similar gender bias. In this observational study, conducted as a multicenter academic collaboration, prompts describing otherwise identical men and women applying for otolaryngology residency positions were created and provided to ChatGPT to generate LORs. The letters were analyzed with a gender-bias calculator that assesses the proportion of male- versus female-associated words. Regardless of gender, school, research, or other activities, all LORs generated by ChatGPT showed a bias toward male-associated words. There was no significant difference in the percentage of male-biased words between letters written for women and for men (39.15% vs 37.85%, P = .77). Significant differences in gender bias were found for each of the other discrete variables (school, research, and other activities). Although all ChatGPT-generated LORs showed a male bias in the language used, there was no difference in gender bias between letters produced using traditionally masculine versus feminine names and pronouns; other variables did, however, induce gendered language. ChatGPT is a promising tool for drafting LORs, but users must be aware of potential biases introduced or propagated through these technologies.
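The abstract does not specify the word lists used by the gender-bias calculator, so the sketch below is only a minimal illustration of the word-proportion approach it describes: count the gender-coded words in a letter and report the share that are male-associated. The lexicons here are abbreviated, hypothetical examples, not the calculator's actual dictionaries.

```python
# Minimal sketch of a word-proportion gender-bias score.
# The lexicons are illustrative placeholders, not the study's word lists.
import re

MALE_ASSOCIATED = {"leader", "analytical", "confident", "independent", "ambitious"}
FEMALE_ASSOCIATED = {"supportive", "compassionate", "warm", "nurturing", "helpful"}

def male_bias_percentage(letter: str) -> float:
    """Return the percentage of gender-coded words that are male-associated."""
    words = re.findall(r"[a-z']+", letter.lower())
    male = sum(w in MALE_ASSOCIATED for w in words)
    female = sum(w in FEMALE_ASSOCIATED for w in words)
    total = male + female
    return 100.0 * male / total if total else 0.0

# Example: a score above 50 indicates more male- than female-associated words.
sample = "She is an analytical, independent leader and a supportive colleague."
print(f"Male-biased words: {male_bias_percentage(sample):.1f}%")
```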
