Abstract

Background: Entity relation extraction can be used to extract entities and relations from medical literature and to automatically construct professional knowledge graphs. The classical text classification model, the convolutional neural network for sentence classification (TEXTCNN), has been shown to have good classification performance, but it suffers from the long-distance dependency problem common to convolutional neural networks (CNNs). Recurrent neural networks (RNNs) address the long-distance dependency problem but cannot capture text features at a specific scale.

Methods: To address these problems, this study established a multi-scale convolutional recurrent neural network for sentence classification (TEXTCRNN) that compensates for the deficiencies of the two neural network structures. In entity relation extraction, an entity pair is generally composed of a subject and an object, but because the subject is frequently omitted in entity pairs in medical literature, it is difficult to obtain entity position information with the usual coding method. We therefore proposed a new coding method to obtain entity position information, re-establish the relationship between subject and object, and complete the entity relation extraction.

Results: Comparing the benchmark neural network models with two typical multi-scale TEXTCRNN models, the TEXTCRNN [bidirectional long short-term memory (BiLSTM)] and the TEXTCRNN [double-layer stacked gated recurrent unit (GRU)], showed that the multi-scale CRNN models achieved the best F1 performance, and that the TEXTCRNN (double-layer stacked GRU) was more capable of classifying entity relations when the same entity word did not belong to the same entity relation.

Conclusions: The experimental results of entity relation extraction from the Pharmacopoeia of the People's Republic of China—Guidelines for Clinical Drug Use—Volume of Chemical Drugs and Biological Products showed that entity relation extraction can proceed effectively with the new labeling method. In addition, compared with typical neural network models, including the TEXTCNN, GRU, and BiLSTM, the multi-scale convolutional recurrent neural network structure had advantages on several evaluation indicators.
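To make the multi-scale convolutional recurrent architecture concrete, the following is a minimal PyTorch-style sketch of a TEXTCRNN-like model: parallel 1-D convolutions with several kernel widths extract n-gram features at multiple scales, and a double-layer GRU models long-distance dependencies over the concatenated feature sequence. The kernel widths, hidden sizes, number of relation labels, and classifier head are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of a multi-scale convolutional recurrent network (TEXTCRNN-style)
# for sentence-level relation classification. Hyperparameters are assumptions.
import torch
import torch.nn as nn


class TextCRNN(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_filters=64,
                 kernel_sizes=(2, 3, 4), gru_hidden=128, num_relations=10):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Multi-scale 1-D convolutions capture local features at several widths.
        self.convs = nn.ModuleList([
            nn.Conv1d(embed_dim, num_filters, k, padding=k // 2)
            for k in kernel_sizes
        ])
        # A stacked (two-layer) GRU models long-distance dependencies over the
        # concatenated multi-scale feature sequence.
        self.gru = nn.GRU(num_filters * len(kernel_sizes), gru_hidden,
                          num_layers=2, batch_first=True)
        self.classifier = nn.Linear(gru_hidden, num_relations)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer word indices
        x = self.embedding(token_ids)                # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                        # (batch, embed_dim, seq_len)
        feats = [torch.relu(conv(x)) for conv in self.convs]
        # Trim to a common length (same-padding differs by one step per kernel width).
        min_len = min(f.size(2) for f in feats)
        feats = [f[:, :, :min_len] for f in feats]
        x = torch.cat(feats, dim=1).transpose(1, 2)  # (batch, min_len, filters*scales)
        _, h_n = self.gru(x)                         # h_n: (num_layers, batch, gru_hidden)
        return self.classifier(h_n[-1])              # logits over relation labels


if __name__ == "__main__":
    model = TextCRNN(vocab_size=5000)
    dummy = torch.randint(1, 5000, (4, 40))          # 4 sentences of 40 tokens
    print(model(dummy).shape)                        # torch.Size([4, 10])
```

Replacing the GRU head with a BiLSTM would give the TEXTCRNN (BiLSTM) variant compared in the Results; the paper's entity position coding (handling omitted subjects) would be supplied as additional input features and is not shown here.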

