Objective
This study explores radiology program directors' perspectives on the impact of large language model (LLM) use among residency applicants to craft personal statements.

Methods
Eight program directors from the Radiology Residency Education Research Alliance participated in a mixed-methods study comprising a survey of impressions of artificial intelligence (AI)-generated personal statements and focus group discussions (July 2023). Each director reviewed four personal statement variations for five applicants, anonymized to author type: the original and three versions generated with varying prompts using Chat Generative Pre-trained Transformer-4.0 (GPT), aggregated for analysis. Writing quality, including voice, clarity, engagement, and organization, and the perceived origin of each statement were rated on a 5-point Likert scale. An experienced qualitative researcher facilitated the focus group discussions. Data were analyzed using a rapid analytic approach with a coding template capturing key areas related to residency applications.

Results
Ratings of GPT-generated statements were more often average or worse in quality (56%, 268 of 475) than ratings of human-authored statements (29%, 45 of 160). Although reviewers were not confident in their ability to distinguish the origin of personal statements, they did so reliably and consistently, identifying 95% (38 of 40) of the human-authored statements as probably or definitely original. Focus group discussions highlighted the inevitable use of AI in crafting personal statements and concerns about its impact on the authenticity and value of the personal statement in residency selection. Program directors were divided on the appropriate use and regulation of AI.

Discussion
Radiology residency program directors rated LLM-generated personal statements as lower in quality and expressed concern about the loss of the applicant's voice, but acknowledged the inevitability of increased AI use in generating application statements.