Abstract

The relation classification task is to predict the relation between an entity pair in a given sentence. Most such sentences contain certain words or schemas that help extract the relationship between the entity pair. However, some sentences lack such structure and require the model to have a certain reasoning capability to predict the relation correctly; we call these "reasoning instances". BERT [1] is a well-known pre-trained language model that learns text representations and has already performed well on various NLP tasks. In this paper, we intend to explore the reasoning capability of BERT on reasoning instances. We first propose a BERT-based relation classification model to test whether BERT can correctly infer the relation between entities in reasoning instances. We then explore what kind of information helps BERT predict the relation in these instances. Through various comparison experiments, we conclude that BERT cannot infer the relation between entities from the meaning of the sentence; instead, it mainly uses concept information about the entities themselves and information learned from previous instances to perform relation classification. This conclusion suggests that BERT can serve to predict the relation between entity pairs defined by multiple sentences.
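
As a rough illustration of what such a model might look like, the sketch below encodes a sentence with BERT and classifies the relation from the [CLS] representation, using HuggingFace Transformers. The entity-marker tokens, label count, and checkpoint name are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of a BERT-based relation classifier (assumed setup, not the
# paper's exact architecture): feed the sentence, with hypothetical entity
# markers, through BERT and classify from the [CLS] token representation.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertRelationClassifier(nn.Module):
    def __init__(self, num_relations: int, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(model_name)
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_relations)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # Use the [CLS] hidden state as the sentence-level encoding.
        cls_repr = outputs.last_hidden_state[:, 0]
        return self.classifier(cls_repr)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Hypothetical entity markers around the entity pair.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[E1]", "[/E1]", "[E2]", "[/E2]"]}
)

model = BertRelationClassifier(num_relations=10)  # label count is an assumption
model.bert.resize_token_embeddings(len(tokenizer))

sentence = "[E1] Alexander Pushkin [/E1] was born in [E2] Moscow [/E2] ."
enc = tokenizer(sentence, return_tensors="pt")
logits = model(enc["input_ids"], enc["attention_mask"])
predicted_relation = logits.argmax(dim=-1)  # index of the predicted relation
```

In this kind of setup, probing BERT's reasoning capability amounts to comparing its predictions on instances with explicit lexical cues against predictions on reasoning instances that lack them.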
