Abstract

Most relation extraction models fix their network parameters after training, which makes them overconfident during prediction and classification. In addition, interference from similar relations in the text degrades extraction performance. We propose a relation extraction model based on relation similarity and a Bayesian neural network. The model uses a logistic-regression loss function to pull the trained parameters closer to the target relation and push them away from similar relations, eliminating the interference of similar relations in the extraction task. Furthermore, it learns a probability distribution over the weights of a Bayesian-LSTM (Bayesian Long Short-Term Memory) network, retaining the corresponding uncertainty while using the LSTM to capture long-distance dependency information, so that the model learns more data features while regularizing at the weight level. An attention mechanism directs the model toward the most useful information. Experimental results on the Wikipedia dataset and the TACRED (TAC Relation Extraction Dataset) dataset show that the proposed method effectively improves entity relation extraction.
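The core idea of replacing fixed weights with learned distributions can be sketched as follows. This is an illustrative Bayes-by-Backprop-style linear layer written for this summary, not the authors' implementation; the Gaussian parameterization and the `BayesianLinear` name are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesianLinear:
    """Linear layer whose weights are distributions, not fixed values.

    Each weight w ~ N(mu, sigma^2) with sigma = softplus(rho), and a fresh
    weight sample is drawn on every forward pass (reparameterization trick).
    This is a hypothetical sketch of the Bayesian-weight idea in the abstract.
    """

    def __init__(self, n_in, n_out):
        self.mu = rng.normal(0.0, 0.1, size=(n_in, n_out))  # mean of each weight
        self.rho = np.full((n_in, n_out), -3.0)             # pre-softplus std dev

    def forward(self, x):
        sigma = np.log1p(np.exp(self.rho))        # softplus keeps sigma > 0
        eps = rng.standard_normal(self.mu.shape)  # noise for reparameterization
        w = self.mu + sigma * eps                 # one sampled weight matrix
        return x @ w

layer = BayesianLinear(4, 2)
x = np.ones((1, 4))
y1, y2 = layer.forward(x), layer.forward(x)
# Two forward passes draw different weight samples, so the outputs differ;
# this sampling is what carries the model's predictive uncertainty.
```

In the paper's setting the same sampling idea is applied inside the LSTM's weight matrices rather than a standalone linear layer, so each forward pass yields a slightly different prediction and the spread across passes reflects uncertainty.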
