Abstract
Soft prompt tuning significantly enhances the performance of pre-trained language models, especially on complex tasks where abundant annotated data is available. Crowdsourcing provides a cost-effective means of obtaining large-scale annotations; however, varying annotation criteria among annotators introduce label noise that can degrade the effectiveness of soft prompt tuning. To address this issue, we conceptualise the annotations from each annotator as a subtask and frame crowdsourcing learning as multitask transfer learning. We propose a novel soft prompt tuning method that uses personalised prompts, designed to capture the annotation principles of individual annotators through a knowledge distillation approach. To validate this hypothesis, we apply our method to four benchmark datasets spanning two crowdsourcing tasks: crowdsourced named entity recognition (CNER) and crowdsourced relation extraction (CRE). Our personalised soft prompt method yields significant improvements, with average gains of 8.96% on CNER and 14.44% on CRE over standard soft prompt tuning, while also achieving competitive results against state-of-the-art crowdsourcing methods.
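To make the setup concrete, the following is a minimal, illustrative sketch (not the authors' released code) of the idea described above: one trainable soft prompt per crowd annotator placed in front of a frozen backbone, plus a shared "consensus" prompt trained by distilling the annotator-specific predictions. All class names, dimensions, the toy backbone, and the exact loss combination are assumptions made for this example.

import torch
import torch.nn as nn
import torch.nn.functional as F


class PersonalisedPromptModel(nn.Module):
    def __init__(self, backbone, hidden_dim, prompt_len, num_annotators, num_labels):
        super().__init__()
        self.backbone = backbone  # frozen pre-trained encoder (stand-in here)
        for p in self.backbone.parameters():
            p.requires_grad = False
        # One personalised soft prompt per annotator plus one shared consensus prompt.
        self.annotator_prompts = nn.Parameter(
            torch.randn(num_annotators, prompt_len, hidden_dim) * 0.02)
        self.consensus_prompt = nn.Parameter(torch.randn(prompt_len, hidden_dim) * 0.02)
        self.classifier = nn.Linear(hidden_dim, num_labels)

    def forward(self, token_embeds, annotator_id=None):
        # Prepend either an annotator-specific prompt or the shared consensus prompt.
        if annotator_id is None:
            prompt = self.consensus_prompt.unsqueeze(0).expand(token_embeds.size(0), -1, -1)
        else:
            prompt = self.annotator_prompts[annotator_id].unsqueeze(0).expand(
                token_embeds.size(0), -1, -1)
        hidden = self.backbone(torch.cat([prompt, token_embeds], dim=1))
        # Drop the prompt positions before token-level classification (e.g. NER tags).
        return self.classifier(hidden[:, prompt.size(1):, :])


def training_step(model, token_embeds, annotator_ids, noisy_labels, temperature=2.0):
    """Fit each annotator's prompt to that annotator's (noisy) labels, then
    distil the annotator-specific predictions into the consensus prompt."""
    ce_loss, teacher_logits = 0.0, []
    for i, ann in enumerate(annotator_ids):
        logits = model(token_embeds[i:i + 1], annotator_id=ann)  # teacher pass
        ce_loss = ce_loss + F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                                            noisy_labels[i].reshape(-1))
        teacher_logits.append(logits)
    teacher = torch.cat(teacher_logits, dim=0).detach()

    student = model(token_embeds)  # consensus pass
    kd_loss = F.kl_div(F.log_softmax(student / temperature, dim=-1),
                       F.softmax(teacher / temperature, dim=-1),
                       reduction="batchmean") * temperature ** 2
    return ce_loss / len(annotator_ids) + kd_loss


if __name__ == "__main__":
    # Toy backbone standing in for a frozen pre-trained encoder.
    backbone = nn.TransformerEncoder(
        nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2)
    model = PersonalisedPromptModel(backbone, hidden_dim=64, prompt_len=8,
                                    num_annotators=5, num_labels=9)
    embeds = torch.randn(3, 12, 64)        # 3 sentences, 12 tokens, embedding dim 64
    labels = torch.randint(0, 9, (3, 12))  # noisy per-annotator tag sequences
    loss = training_step(model, embeds, annotator_ids=[0, 2, 4], noisy_labels=labels)
    loss.backward()
    print(float(loss))

In this sketch each annotator's prompt acts as a "teacher" fitted to that annotator's noisy labels, and the consensus prompt is the "student" trained on the softened teacher distributions, mirroring the multitask-transfer framing of the abstract.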