Abstract

Relation extraction is a crucial part of knowledge extraction, and deep learning algorithms are increasingly replacing conventional pattern-recognition techniques for this task. Most existing work on Chinese relation extraction based on distant supervision or kernel functions cannot avoid the detrimental effect that noisy data in the dataset has on experimental results. This paper proposes a Chinese relation extraction model that combines a bidirectional GRU neural network with a two-layer attention mechanism. To address the forgetting problem, a bidirectional GRU is used to fuse the input vectors, and, reflecting the structural properties of Chinese, word vectors are employed as the input. A word-level attention mechanism retrieves feature information within each sentence, and a sentence-level attention mechanism extracts features across sentences. For validation, about 1,300 data items were collected from news websites using a distantly supervised approach. The experimental results show that the neural network model with the two-layer attention mechanism is able to make full use of the feature information in a sentence, achieving noticeably higher accuracy and recall than the same neural network without an attention mechanism.
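The two-layer attention described above can be sketched as follows: word-level attention weights the BiGRU hidden states of one sentence into a sentence vector, and sentence-level attention weights the sentence vectors of a distantly supervised bag into a single bag representation. This is a minimal NumPy illustration under assumed shapes and randomly initialized attention parameters (`w`, `r` are hypothetical names), not the authors' exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def word_attention(H, w):
    # H: (T, d) BiGRU hidden states for one sentence of T words.
    # Attention scores per word, then a weighted sum -> sentence vector.
    alpha = softmax(np.tanh(H) @ w)   # (T,)
    return alpha @ H                  # (d,)

def sentence_attention(S, r):
    # S: (n, d) sentence vectors for one entity-pair bag
    # (distant supervision groups sentences by entity pair).
    beta = softmax(S @ r)             # (n,)
    return beta @ S                   # (d,) bag representation

rng = np.random.default_rng(0)
d = 8
# Two sentences of different lengths, each a matrix of hidden states.
sentences = [rng.standard_normal((5, d)), rng.standard_normal((7, d))]
w = rng.standard_normal(d)            # word-level attention parameter (assumed)
r = rng.standard_normal(d)            # sentence-level attention parameter (assumed)

S = np.stack([word_attention(H, w) for H in sentences])
bag = sentence_attention(S, r)        # fed to a classifier in the full model
```

In the full model the two attention parameters would be learned jointly with the BiGRU; sentence-level attention is what lets the model down-weight the noisy sentences that distant supervision introduces.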
