Abstract
Relation extraction is one of the fundamental subtasks of information extraction; its purpose is to determine the implicit relation between two entities in a sentence. For the few-shot setting, Convolutional Neural Networks with Feature Attention-based Prototypical Networks (CNN-Proto-FATT), a typical few-shot learning method, has been proposed and achieves competitive performance. However, convolutional encoders struggle when relation instances are scarce in real scenes, leading to undesirable results. To extract long-distance features more comprehensively, the pre-trained Bidirectional Encoder Representations from Transformers (BERT) model is incorporated into CNN-Proto-FATT. In the resulting model, Bidirectional Encoder Representations from Transformers with Feature Attention-based Prototypical Networks (BERT-Proto-FATT), multi-head attention helps the network capture semantic features across both long and short distances, enhancing the encoded representations. Experimental results show that BERT-Proto-FATT achieves significant improvements on the FewRel dataset.
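To make the architecture concrete, the sketch below shows the general pattern the abstract describes: instances are encoded with a pre-trained BERT model, class prototypes are built from the support set, and a feature-level attention vector re-weights the distance metric. This is a minimal illustration under stated assumptions, not the authors' implementation: it uses the Hugging Face `transformers` package, the `bert-base-uncased` checkpoint, and a variance-based heuristic as a stand-in for the paper's feature attention; all function names and shapes here are hypothetical.

```python
# Minimal sketch of a BERT-encoded prototypical network with feature-level
# attention (hypothetical names/shapes; the attention heuristic is a stand-in,
# not the paper's exact Proto-FATT mechanism).
import torch
import torch.nn.functional as F
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

def encode(sentences):
    """Encode a list of sentences into fixed-size vectors via the [CLS] token."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = encoder(**batch)
    return out.last_hidden_state[:, 0]          # (batch, hidden)

def prototypes_with_feature_attention(support, n_way, k_shot):
    """Average K support embeddings per class into prototypes, and derive a
    feature-level attention vector from the per-class variance (dimensions on
    which the support instances agree are assumed to be more discriminative)."""
    support = support.view(n_way, k_shot, -1)    # (N, K, hidden)
    proto = support.mean(dim=1)                  # (N, hidden)
    att = F.softmax(-support.var(dim=1), dim=-1) # (N, hidden)
    return proto, att

def classify(query, proto, att):
    """Assign each query to the nearest prototype under an
    attention-weighted squared Euclidean distance."""
    diff = query.unsqueeze(1) - proto.unsqueeze(0)   # (Q, N, hidden)
    dist = (att.unsqueeze(0) * diff ** 2).sum(-1)    # (Q, N)
    return (-dist).argmax(dim=1)                     # predicted class per query
```

In an N-way K-shot episode, `encode` would be applied to both the support and query sentences, after which prototypes and predictions follow from the two helper functions above.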