Abstract

Data in some fields are scarce because they are difficult or expensive to obtain. The common practice is to pre-train a model on similar, larger data sets and then fine-tune it on the downstream task via transfer learning. Pre-trained models learn general language representations from large-scale corpora, but the downstream task may differ from the pre-training task in form and type, and the models lack related semantic knowledge. Therefore, we propose PK-BERT: Knowledge-Enhanced Pre-trained Models with Prompt for Few-shot Learning. PK-BERT (1) achieves few-shot learning by using small samples with pre-trained models; (2) constructs a prefix containing the masked label to narrow the gap between the downstream task and the pre-training task; (3) injects knowledge graph triples into the text as explicit representations to enrich the sentence information; and (4) uses the masked language modelling (MLM) head to convert the classification task into a generation task. Experiments show that PK-BERT achieves better results.
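Points (2)–(4) can be illustrated with a minimal sketch. The helper names (`build_prompt`, `predict_label`), the prompt wording, and the verbalizer mapping below are assumptions for illustration, not the paper's exact implementation: a label-bearing `[MASK]` prefix is prepended, knowledge-graph triples are appended as explicit text, and classification is reduced to scoring label words at the mask position.

```python
def build_prompt(text, triples, prefix="It is about [MASK]."):
    """Build a knowledge-enhanced prompt (hypothetical format): a prefix
    holding the masked label, the original sentence, and the injected
    (head, relation, tail) triples rendered as plain text."""
    knowledge = " ".join(f"{h} {r} {t}." for h, r, t in triples)
    return f"{prefix} {text} {knowledge}".strip()


def predict_label(mask_logits, label_word_ids):
    """Classification as generation: among the MLM logits at the [MASK]
    position, score only each label's verbalizer token and return the
    best-scoring label."""
    return max(label_word_ids, key=lambda label: mask_logits[label_word_ids[label]])
```

For example, `build_prompt("Apple released a new phone.", [("Apple", "is a", "company")])` yields a single string that an MLM such as BERT can consume, and `predict_label` then reads the prediction off the mask position instead of a separate classification head.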
