Abstract

Domain tuning of pre-trained language models (PLMs) with task-specific prompts has achieved great success in different domains. Cloze-style language prompts stimulate the versatile knowledge of PLMs and directly bridge the gap between pre-training tasks and various downstream tasks. Large unlabelled corpora in the biomedical domain have been created in the last decade (e.g., PubMed, PMC, MIMIC, and ScienceDirect). In this paper, we introduce BioKnowPrompt, a prompt-tuned PLM that incorporates imprecise knowledge into the verbalizer for biomedical text relation extraction. In particular, we use learnable entity words and learnable relation words to infuse entity and relation information into prompt construction, and we apply biomedical domain knowledge constraints to synergistically improve their representations. By fine-tuning PLMs with these additional prompts, we further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks such as relation extraction. BioKnowPrompt shows significant potential in few-shot learning, outperforming previous models and achieving state-of-the-art results on the 5 datasets.
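
To make the cloze-style prompt idea concrete, the following is a minimal sketch (not the authors' implementation) of scoring relation labels with a masked language model and a simple verbalizer. The model name, the prompt template wording, and the label-word mapping are all assumptions for illustration; BioKnowPrompt additionally uses learnable entity/relation words and knowledge constraints, which are not shown here.

```python
# Minimal sketch: cloze-style prompting for relation extraction with an MLM.
# Assumed pieces (not from the paper): model choice, template, verbalizer words.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "dmis-lab/biobert-base-cased-v1.1"  # hypothetical biomedical PLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Verbalizer: map each relation label to a single label word in the vocabulary.
verbalizer = {
    "treats": "treats",
    "causes": "causes",
    "no_relation": "ignores",
}

def score_relations(sentence: str, head: str, tail: str) -> dict:
    """Fill a cloze template and read MLM probabilities of each label word."""
    template = f"{sentence} {head} {tokenizer.mask_token} {tail}."
    inputs = tokenizer(template, return_tensors="pt")
    # Locate the [MASK] position in the encoded prompt.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0][0]
    with torch.no_grad():
        logits = model(**inputs).logits[0, mask_pos]
    probs = logits.softmax(dim=-1)
    return {
        label: probs[tokenizer.convert_tokens_to_ids(word)].item()
        for label, word in verbalizer.items()
    }

print(score_relations("Aspirin relieves headache.", "Aspirin", "headache"))
```

In prompt tuning, the template tokens and verbalizer embeddings would be made learnable and optimized on labeled examples rather than fixed as above.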
