Abstract

Traditional deep-learning strategies for sentiment analysis rely heavily on large-scale labeled datasets for model training, and they become markedly less effective on small-scale datasets. Fine-tuning large pre-trained models on small datasets is currently the most common way to address this issue. Recently, prompt-based learning has attracted significant attention as a promising research direction: by using prompts to reformulate downstream tasks, it can mitigate data scarcity. However, existing prompt-based methods for few-shot sentiment analysis remain inefficient. To address this, an adaptive prompt-based learning method is proposed with two components. First, an adaptive prompt construction strategy uses a dot-product attention structure to capture the semantic information of input texts, improving the quality of the prompt templates. Second, contrastive learning is applied to the two implicit word vectors obtained from two forward passes during training, which alleviates over-fitting in few-shot settings; this acts as data augmentation that leaves the semantic information of the input sentence unchanged and improves the model's generalization ability. Experimental results on the EPRSTMT dataset of FewCLUE demonstrate that the proposed method constructs suitable adaptive prompts and outperforms state-of-the-art baselines.
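The abstract does not spell out the attention mechanism used for adaptive prompt construction. A minimal sketch of scaled dot-product attention that derives prompt vectors from a sentence's token embeddings might look like the following; the idea of learnable "prompt query" slots, and all shapes and names, are assumptions for illustration, not the paper's actual architecture:

```python
import numpy as np

def dot_product_attention(query, keys, values):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)           # (n_q, n_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # each row sums to 1
    return weights @ values                        # (n_q, d)

# Hypothetical usage: 3 learnable prompt slots attend over 5 token embeddings,
# so each prompt vector is a semantics-aware mixture of the input tokens.
rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))          # token embeddings of the input sentence
prompt_queries = rng.normal(size=(3, 8))  # learnable prompt slots (assumption)
prompt = dot_product_attention(prompt_queries, tokens, tokens)
print(prompt.shape)  # (3, 8)
```

Because the attention weights form a convex combination over the token embeddings, each resulting prompt vector stays in the span of the input sentence's representations, which is one plausible way to make the template adapt to the text.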
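Applying contrastive learning to "implicit word vectors obtained twice" suggests a SimCSE-style setup, where the same sentence representation is perturbed by two independent dropout masks and the two views are pulled together by an InfoNCE loss. The sketch below is an assumed reading of that idea, not the paper's exact loss; all function names and hyperparameters are illustrative:

```python
import numpy as np

def dropout(x, p, rng):
    """Inverted dropout: randomly zero features, rescale the rest."""
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

def info_nce_loss(z1, z2, temperature=0.05):
    """InfoNCE over a batch: z1[i] and z2[i] are two dropout views of sentence i."""
    z1n = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2n = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1n @ z2n.T / temperature                # (B, B) cosine similarities
    sim -= sim.max(axis=1, keepdims=True)          # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives sit on the diagonal: view i of sentence i vs. view 2 of sentence i
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(1)
h = rng.normal(size=(4, 16))                       # sentence representations (hypothetical)
z1 = dropout(h, 0.1, rng)                          # first stochastic view
z2 = dropout(h, 0.1, rng)                          # second stochastic view
loss = info_nce_loss(z1, z2)
```

Because the two views come from the same input sentence, this acts as a data augmentation that preserves semantics while regularizing the few-shot model against over-fitting.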

