Trauma is the leading cause of death in adults under the age of 45 and the fourth leading cause of death in the United States. Effective delivery of trauma care centers on being well versed in the Advanced Trauma Life Support (ATLS) protocol, which requires a high level of clinical experience. That experience typically comes from exposure to the many permutations of common injuries as well as to rarer scenarios, but it is acquired with potential harm to patients. Case scenarios, which are sequential representations of clinical events, can give trainees clinical exposure without harming patients. However, authoring case scenarios requires domain expertise, wide experience, and the ability to respond intelligently to inputs, and is therefore currently an arduous task. Autoregressive generative models trained on large amounts of clinical data, such as the National Trauma Data Bank (NTDB), offer a possible way to overcome the cost of authorship while providing broad and accessible clinical experience to trainees. We have developed Trauma AI, a model that combines an autoregressive generative model based on the transformer architecture for generating candidate case scenarios with an out-of-domain detection method for filtering out less plausible scenarios. The GPT-2 model is trained on 1.1 million case scenarios derived from NTDB data. We demonstrate that Trauma AI can generate realistic case scenarios that encode the ATLS protocol as a latent feature of the sequence of provider interventions, including scenarios with no parallel in the original dataset. We also present an unsupervised means of filtering out unrealistic sequences by identifying out-of-domain sequences, and demonstrate that this filtering improves the realism of the generated case scenarios.
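To make the generate-then-filter pipeline concrete, the sketch below shows one way such a system could be wired together with a fine-tuned GPT-2: sample candidate case scenarios, then discard the least plausible ones. This is an illustrative assumption rather than the authors' released code; the model path `trauma-ai-gpt2`, the prompt format, and the use of perplexity thresholding as a stand-in for the paper's out-of-domain detector (whose exact form the abstract does not specify) are all hypothetical.

```python
# Illustrative sketch only: sample candidate trauma case scenarios from a
# fine-tuned GPT-2 and filter out high-perplexity (presumed out-of-domain)
# generations. Model path, prompt, and threshold are hypothetical.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

MODEL_DIR = "trauma-ai-gpt2"  # hypothetical checkpoint fine-tuned on NTDB-derived scenarios

tokenizer = GPT2TokenizerFast.from_pretrained(MODEL_DIR)
model = GPT2LMHeadModel.from_pretrained(MODEL_DIR).eval()

def generate_scenarios(prompt: str, n: int = 8, max_new_tokens: int = 128):
    """Sample n candidate case-scenario continuations for a given prompt."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            do_sample=True,
            top_p=0.95,
            temperature=0.9,
            num_return_sequences=n,
            max_new_tokens=max_new_tokens,
            pad_token_id=tokenizer.eos_token_id,
        )
    return [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]

def perplexity(text: str) -> float:
    """Perplexity of a scenario under the model; higher suggests out-of-domain."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def filter_plausible(scenarios, threshold: float = 30.0):
    """Keep only scenarios below an assumed perplexity threshold."""
    return [s for s in scenarios if perplexity(s) < threshold]

candidates = generate_scenarios("Arrival: 34-year-old male, motor vehicle collision.")
plausible = filter_plausible(candidates)
```

In practice, the filtering step could use any unsupervised out-of-domain score in place of raw perplexity; the point of the sketch is only the two-stage structure of generation followed by plausibility filtering described in the abstract.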