Medical texts are rich in specialized knowledge and clinical information. As the medical and healthcare sectors become increasingly digitized, these texts must be effectively harnessed to derive insights and patterns, and this emerging research area has therefore attracted considerable attention. Natural language processing (NLP) algorithms are typically employed to extract comprehensive information from unstructured medical texts, with the aim of constructing a graph database of medical knowledge. One outstanding need is to reduce model size while preserving the precision of the BART algorithm. To this end, a carefully designed algorithm, ALBERT+Bi-LSTM+CRF, is introduced, attaining both improved efficiency and scalability. In entity extraction, the proposed algorithm achieves an F-score of 91.8%, a precision of 92.5%, and a recall of 94.3%. It also achieves strong results in relation extraction, with an F-score of 88.3%, a precision of 88.1%, and a recall of 88.4%. These results further underscore its practicality for medical knowledge graph construction.
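To make the final stage of the ALBERT+Bi-LSTM+CRF pipeline concrete, the sketch below shows Viterbi decoding for a linear-chain CRF, the step that turns per-token scores into a consistent entity tag sequence. All names, the toy BIO tag set, and the emission/transition scores are hypothetical stand-ins for what the Bi-LSTM layer would actually produce; this is an illustrative sketch, not the paper's implementation.

```python
# Hypothetical BIO tag set for a single entity type; the paper's tag
# inventory is not specified in the abstract.
TAGS = ["O", "B-DISEASE", "I-DISEASE"]

def viterbi_decode(emissions, transitions):
    """Return the highest-scoring tag path for one sentence.

    emissions:   list of dicts, one per token, mapping tag -> score
                 (standing in for Bi-LSTM outputs).
    transitions: dict mapping (prev_tag, tag) -> score (the CRF's
                 learned transition matrix).
    """
    n = len(emissions)
    # dp[t][tag] = (best score of a path ending in `tag` at token t,
    #               backpointer to the previous tag on that path)
    dp = [{tag: (emissions[0][tag], None) for tag in TAGS}]
    for t in range(1, n):
        row = {}
        for tag in TAGS:
            best_prev, best_score = None, float("-inf")
            for prev in TAGS:
                s = dp[t - 1][prev][0] + transitions[(prev, tag)] + emissions[t][tag]
                if s > best_score:
                    best_prev, best_score = prev, s
            row[tag] = (best_score, best_prev)
        dp.append(row)
    # Backtrack from the best-scoring final tag.
    last = max(TAGS, key=lambda tag: dp[-1][tag][0])
    path = [last]
    for t in range(n - 1, 0, -1):
        path.append(dp[t][path[-1]][1])
    return list(reversed(path))

# Toy scores for three tokens, e.g. "patient", "lung", "cancer".
emissions = [
    {"O": 2.0, "B-DISEASE": 0.1, "I-DISEASE": 0.0},
    {"O": 0.2, "B-DISEASE": 1.5, "I-DISEASE": 0.3},
    {"O": 0.1, "B-DISEASE": 0.4, "I-DISEASE": 1.6},
]
transitions = {(p, t): 0.0 for p in TAGS for t in TAGS}
transitions[("O", "I-DISEASE")] = -5.0       # I- may not follow O
transitions[("B-DISEASE", "I-DISEASE")] = 1.0  # B- to I- is encouraged

print(viterbi_decode(emissions, transitions))
# ['O', 'B-DISEASE', 'I-DISEASE']
```

The transition scores are what distinguish a CRF layer from per-token softmax classification: invalid tag sequences (such as an I- tag with no preceding B- tag) are penalized globally, which is why CRF output layers are common in medical named-entity recognition.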