Abstract
Medical text classification is one of the primary steps in health care automation. Timely diagnosis and referral to the right specialist are important for patients. To that end, this study classifies two types of medical text into medical specialties: keyword-based medical notes and prescriptions. Many methods and techniques exist for classifying text in any domain, but the textual resources of a specific domain can be too scarce to build a robust, accurate classifier. This problem can be addressed with transfer learning. The objective of this study is to analyze the prospects of transfer learning for medical text classification. To do so, a transfer learning system was built for both classification tasks by fine-tuning the Bidirectional Encoder Representations from Transformers (BERT) language model, and its performance was compared with three deep learning models: a multi-layer perceptron, a long short-term memory network, and a convolutional neural network. The fine-tuned BERT model outperformed all the other models on both tasks, achieving weighted F1-scores of 0.84 in classifying medical notes and 0.96 in classifying prescriptions. This study shows that transfer learning can be used in medical text classification and that significant performance improvements can be achieved through it.
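To make the fine-tuning approach concrete, below is a minimal sketch of adapting a pre-trained BERT model for specialty classification. The paper does not name its tooling, so this assumes the Hugging Face Transformers library; the specialty labels, toy texts, and hyperparameters are all illustrative placeholders, not the study's actual data or settings.

```python
# A minimal sketch of fine-tuning BERT for medical-specialty classification,
# assuming the Hugging Face Transformers library. The label set, example
# texts, and hyperparameters below are hypothetical.
import torch
from torch.utils.data import Dataset
from transformers import (BertTokenizerFast, BertForSequenceClassification,
                          Trainer, TrainingArguments)

SPECIALTIES = ["cardiology", "dermatology", "neurology"]  # hypothetical labels

class TextDataset(Dataset):
    """Wraps raw texts and integer labels as tokenized model inputs."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True, max_length=128)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# A classification head is added on top of the pre-trained encoder;
# fine-tuning updates both the head and the encoder weights.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(SPECIALTIES)
)

# Toy stand-ins for the keyword-based medical notes / prescriptions.
train_ds = TextDataset(
    ["chest pain shortness of breath", "itchy rash on forearm",
     "recurring migraine with aura"],
    [0, 1, 2],
    tokenizer,
)

args = TrainingArguments(output_dir="bert-medical", num_train_epochs=3,
                         per_device_train_batch_size=2, learning_rate=2e-5)
trainer = Trainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```

The same pattern applies to both tasks in the study: only the label set and the training corpus change, while the pre-trained encoder supplies the general language knowledge that the small domain-specific corpus lacks.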