Abstract

Electronic consult (eConsult) systems give specialists the flexibility to respond to referrals more efficiently, thereby increasing access in under-resourced healthcare settings such as safety-net systems. Understanding the usage patterns of an eConsult system is an important part of improving specialist efficiency. In this work, we develop and apply classifiers to a dataset of eConsult questions from primary care providers to specialists, classifying each message by how it was triaged by the specialist office and by the underlying type of clinical question posed by the primary care provider. We show that pre-trained transformer models are strong baselines, with performance improving further from domain-specific pre-training and shared representations.

Highlights

  • Electronic consult systems allow primary care providers (PCPs) to send short messages to specialists when they require specialist input

  • Domain-specific BERT models have been released, including BioBERT (Lee et al., 2020), which started from a BERT checkpoint and extended pre-training on biomedical journal articles; SciBERT (Beltagy et al., 2019), which is pre-trained from scratch with its own vocabulary; and ClinicalBERT (Alsentzer et al., 2019), which started from BERT checkpoints and extended pre-training on intensive care unit documents from the MIMIC corpus (Johnson et al., 2016)

  • Results for the support vector machine (SVM) with a linear kernel and several fine-tuned BERT models show that training across all consults results in poor performance (Table 1)
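The SVM baseline mentioned above can be sketched with scikit-learn. This is a minimal illustration, not the authors' implementation: the example questions, triage labels, and feature settings below are invented, and `LinearSVC` stands in for an SVM with a linear kernel.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Invented eConsult questions and triage labels, for illustration only.
questions = [
    "Patient with elevated TSH, please advise on levothyroxine dosing",
    "Rash on forearm not responding to topical steroids, photo attached",
    "New atrial fibrillation on ECG, does patient need urgent cardiology visit",
    "Chronic knee pain with normal X-ray, is MRI indicated before referral",
]
labels = [
    "resolved_electronically",
    "resolved_electronically",
    "scheduled_visit",
    "scheduled_visit",
]

# TF-IDF features feeding a linear-kernel SVM, mirroring the baseline setup.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(questions, labels)

print(clf.predict(["Abnormal ECG, does this need an urgent visit?"])[0])
```

The same pipeline object can be refit per specialty or across all consults, which is the comparison the table refers to.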


Summary

Introduction

Electronic consult (eConsult) systems allow primary care providers (PCPs) to send short messages to specialists when they require specialist input. These questions are much shorter than, say, electronic health record texts. Domain-specific BERT models have been released, including BioBERT (Lee et al., 2020), which started from a BERT checkpoint and extended pre-training on biomedical journal articles; SciBERT (Beltagy et al., 2019), which is pre-trained from scratch with its own vocabulary; and ClinicalBERT (Alsentzer et al., 2019), which started from BERT checkpoints and extended pre-training on intensive care unit documents from the MIMIC corpus (Johnson et al., 2016). We use vanilla BERT, SciBERT, and two versions of ClinicalBERT: Bio+Clinical BERT and Bio+Discharge Summary BERT.
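A minimal sketch of how such checkpoints can be loaded for fine-tuning as classifiers with the Hugging Face `transformers` library. The hub identifiers below are assumptions based on the models' public releases, and the label count is an illustrative placeholder, not the paper's label set.

```python
# Hub identifiers for the compared checkpoints (assumed from public releases).
CHECKPOINTS = {
    "BERT": "bert-base-uncased",
    "SciBERT": "allenai/scibert_scivocab_uncased",
    "Bio+Clinical BERT": "emilyalsentzer/Bio_ClinicalBERT",
    "Bio+Discharge Summary BERT": "emilyalsentzer/Bio_Discharge_Summary_BERT",
}


def build_classifier(name: str, num_labels: int):
    """Load a checkpoint and attach a fresh sequence-classification head."""
    # Imported lazily so the mapping above is usable without transformers
    # installed; loading triggers a download of the checkpoint weights.
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    checkpoint = CHECKPOINTS[name]
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSequenceClassification.from_pretrained(
        checkpoint, num_labels=num_labels
    )
    return tokenizer, model
```

Calling, say, `build_classifier("Bio+Clinical BERT", num_labels=3)` would return a tokenizer and a model with a randomly initialized three-way classification head, ready to fine-tune on the consult labels.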

