Abstract
The growing sophistication of deep learning has driven advances in the automated processing of medical texts. Applying deep learning to medical text classification can reduce the degree of manual intervention and improve classification accuracy. This paper therefore designs and proposes the BGA and B2GA models based on BioBERT. Both models use BioBERT as the word-embedding layer and extract feature vectors with a bidirectional GRU (BiGRU) and a multi-head self-attention mechanism. Experimental results show that, in the medical domain, the BioBERT model classifies more accurately than general-purpose BERT, improving accuracy by about 38%; that BiGRU is better suited than bidirectional LSTM (BiLSTM) to datasets with fewer entries; and that the proposed BGA and B2GA models achieve accuracies above 72% and 70%, respectively, on automatic medical text classification. Compared with the BioBERT model (proposed by Korea University) and Bio-ClinicalBERT, they achieve better text classification performance.
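To make the described architecture concrete, the sketch below wires together the components named in the abstract: BioBERT as the word-embedding layer, a BiGRU, a multi-head self-attention layer, and a classification head. It is a minimal illustration, not the paper's implementation; the checkpoint name, hidden sizes, head count, and mean-pooling step are all assumptions.

```python
# Minimal BGA-style sketch: BioBERT embeddings -> BiGRU -> multi-head self-attention -> classifier.
# Hyperparameters and the checkpoint name are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BGAClassifier(nn.Module):
    def __init__(self, num_classes: int,
                 checkpoint: str = "dmis-lab/biobert-base-cased-v1.1",
                 gru_hidden: int = 128, num_heads: int = 4):
        super().__init__()
        self.bert = AutoModel.from_pretrained(checkpoint)  # BioBERT as the word-embedding layer
        self.bigru = nn.GRU(self.bert.config.hidden_size, gru_hidden,
                            batch_first=True, bidirectional=True)
        # Multi-head self-attention over BiGRU outputs (2 * gru_hidden features per token)
        self.attn = nn.MultiheadAttention(embed_dim=2 * gru_hidden,
                                          num_heads=num_heads, batch_first=True)
        self.classifier = nn.Linear(2 * gru_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        token_embeddings = self.bert(input_ids=input_ids,
                                     attention_mask=attention_mask).last_hidden_state
        gru_out, _ = self.bigru(token_embeddings)
        attn_out, _ = self.attn(gru_out, gru_out, gru_out,
                                key_padding_mask=~attention_mask.bool())
        # Mean-pool over non-padding tokens before classification (an assumed pooling choice)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (attn_out * mask).sum(dim=1) / mask.sum(dim=1)
        return self.classifier(pooled)

if __name__ == "__main__":
    tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-base-cased-v1.1")
    model = BGAClassifier(num_classes=3)
    batch = tokenizer(["Patient presents with acute chest pain."],
                      return_tensors="pt", padding=True, truncation=True)
    logits = model(batch["input_ids"], batch["attention_mask"])
    print(logits.shape)  # torch.Size([1, 3])
```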