Personal implicit biases may contribute to inequitable health outcomes, but the mechanisms of these effects are unclear at a system level. This study aimed to determine whether stigmatizing subjective terms in electronic medical records (EMRs) reflect larger societal racial biases. A cross-sectional study was conducted using natural language processing software to identify all documentation in which one or more predefined stigmatizing words were used between January 1, 2019 and June 30, 2021. EMRs from emergency care and inpatient encounters in a metropolitan healthcare system were analyzed, focusing on the presence or absence of race-based differences in word usage, either by specific terms or by groupings of negative or positive terms based on the words' common perceptions. The persistence ("stickiness") of negative and/or positive characterizations across an individual's subsequent encounters was also evaluated. Final analyses included 12,238 encounters for 9,135 patients, ranging in age from newborn to 104 years. The analyzed groups were White (68%) and Black/African American (17%) patients. Several negative terms (e.g., noncompliant, disrespectful, and curse words) appeared significantly more frequently in encounters with Black/African American patients. In contrast, positive terms (e.g., compliant, polite) were statistically more likely to appear in White patients' documentation. Independent of race, negative characterizations were twice as likely as positive ones to persist into subsequent encounters. The use of stigmatizing language in documentation mirrors the same race-based inequities seen in medical outcomes and larger sociodemographic trends. This may contribute to observed differences in healthcare outcomes by disseminating one clinician's implicit biases to unknown future healthcare providers.