Abstract

Pre-trained Language Models (PLMs) have demonstrated remarkable performance on Natural Language Understanding (NLU) tasks, and continuous prompt-based fine-tuning further enhances their capabilities. However, current methods rely on hand-crafted discrete prompts to initialize continuous prompts; such prompts are sensitive to subtle changes and inherently limited by the constraints of natural language. To address these limitations, this study introduces an AutoPrompt-based Prompt Tuning (APT) approach. APT optimizes the initialization of continuous prompts by employing a gradient-guided automatic search to generate ideal discrete templates and identify trigger tokens. Because the trigger tokens already capture semantic features of the target task dataset, the continuous parameters they initialize are highly task-relevant, providing a superior starting point for prompt tuning. APT searches for optimal prompts across various NLU tasks, enabling the PLM to learn task-related knowledge effectively. The method significantly improves PLM performance in both few-shot and fully supervised settings while eliminating the need for extensive prompt engineering. On the LAMA (Language Model Analysis) knowledge exploration benchmark, APT achieves 58.6% P@1 without additional text, a 3.6% improvement over the previous best result. APT also outperforms state-of-the-art methods on the SuperGLUE benchmark.
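
The following is a minimal sketch of the idea described above, assuming a HuggingFace masked-LM backbone: a gradient-guided (HotFlip-style) candidate search over the vocabulary, followed by initializing the continuous prompt parameters from the embeddings of the selected trigger tokens. The backbone name, the helper `hotflip_candidates`, and the example trigger words are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of APT-style continuous-prompt initialization (illustrative only).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "roberta-large"  # assumed backbone
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)
embeddings = model.get_input_embeddings()  # [vocab_size, hidden_size]


def hotflip_candidates(grad_at_position: torch.Tensor, k: int = 10) -> torch.Tensor:
    """Gradient-guided candidate search for one trigger slot.

    Scores every vocabulary token by the first-order estimate of how much it
    would reduce the task loss if it replaced the current token, and returns
    the indices of the k most promising candidates.
    """
    scores = torch.matmul(embeddings.weight, -grad_at_position)  # [vocab_size]
    return torch.topk(scores, k).indices


# Placeholder trigger words; in APT these would be the tokens selected by the
# gradient-guided search over the target task's training data.
trigger_ids = tokenizer(" atmosphere overall feels", add_special_tokens=False).input_ids

# APT-style initialization: copy the trigger-token embeddings into the
# continuous prompt so that prompt tuning starts from task-relevant semantics
# instead of a random or hand-crafted initialization.
prompt_embeddings = torch.nn.Parameter(
    embeddings.weight[trigger_ids].clone().detach()
)
```

During prompt tuning, only `prompt_embeddings` (and any task head) would be updated while the PLM parameters stay frozen, which is the standard continuous prompt-tuning setup this initialization plugs into.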
