Abstract

The conversational chatbot ChatGPT has attracted significant attention from both the media and researchers due to its potential applications, as well as concerns surrounding its use. This study evaluates ChatGPT’s efficacy in healthcare education, focusing on the inclusivity of its language. Person-first language, which prioritizes the individual over their medical condition, is an important component of inclusive language in healthcare. The aim of the present study was to test ChatGPT’s responses to non-inclusive, non-patient-first, judgmental, and often offensive language inputs. Provocative phrases based on a list of “do not use” recommendations for inclusive language were selected and used to formulate input questions. The occurrences of each provocative phrase or its substitute(s) within the responses generated by ChatGPT were counted to calculate the Person-First Index, which measures the percentage of person-first language. The study reveals that ChatGPT avoids using judgmental or stigmatized phrases when discussing mental health conditions, instead using alternative person-first language that focuses on individuals rather than their conditions, both in response to questions and in correcting English grammar. However, ChatGPT exhibits less adherence to person-first language in responses related to physiological medical conditions or addictions, often mirroring the language of the inputs instead of adhering to inclusive language recommendations. The chatbot used person-first language more frequently when referring to “people” rather than “patients.” In summary, the findings show that despite the controversy surrounding its use, ChatGPT can contribute to promoting more respectful language, particularly when discussing mental health conditions.
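
The abstract defines the Person-First Index only as the percentage of person-first language among the counted phrases. As a minimal sketch of how such a percentage could be computed, assuming the index is simply person-first occurrences divided by all relevant occurrences (the function name and counting scheme below are illustrative assumptions, not the authors' published procedure):

```python
def person_first_index(pf_count: int, non_pf_count: int) -> float:
    """Percentage of person-first phrasings among all counted mentions.

    pf_count: occurrences of person-first substitutes in a response
              (e.g. "person with diabetes")
    non_pf_count: occurrences of the original non-person-first phrase
                  (e.g. "diabetic")
    """
    total = pf_count + non_pf_count
    if total == 0:
        return 0.0  # no relevant mentions found in the response
    return 100.0 * pf_count / total


# Example: 7 person-first mentions vs. 3 condition-first mentions -> 70.0
print(person_first_index(7, 3))
```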
