Abstract

Sentiment analysis is a traditional, well-known task in Natural Language Processing (NLP) research. In recent years, Pre-trained Models (PMs) have become one of the frontiers of NLP, and the knowledge encoded in PMs is commonly leveraged to improve the performance of machine learning models on a variety of downstream NLP tasks, including sentiment analysis. However, PM-based approaches also have shortcomings. For example, many studies have pointed out that there is a gap between the pre-training and fine-tuning objectives. In addition, because data annotation is time-consuming and costly, labeled training data are usually scarce, which often leads to model over-fitting. The recent advent of prompt learning offers a promising solution to these challenges. In this paper, we first discuss the background of prompt learning and its basic principle: prompt learning changes the model input by adding templates, allowing the learning task to adapt to the pre-trained model, and can therefore promote the innovation and applicability of pre-trained models. We then review the evolution of sentiment analysis and explore the application of prompt learning to different sentiment analysis tasks. Our review shows that prompt learning is well suited to sentiment analysis tasks and can achieve strong performance. Finally, we outline some future research directions for prompt-based sentiment analysis. Our survey demonstrates that prompt learning can facilitate the efficient utilization of pre-trained models in sentiment analysis and other tasks, making it a new paradigm worthy of further exploration.
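
To make the template idea concrete, the following is a minimal sketch of prompt-based sentiment classification with a masked language model. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint; the cloze template "It was [MASK]." and the verbalizer words "great"/"terrible" are illustrative assumptions, not the formulation of any specific method surveyed here.

```python
# Minimal sketch: prompt-based sentiment classification with a masked LM.
# Assumption: Hugging Face `transformers` is installed and the
# `bert-base-uncased` checkpoint is used; template and verbalizer words
# below are illustrative choices only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def classify_sentiment(text: str) -> str:
    # Wrap the input in a cloze-style template so the classification task
    # resembles the masked-token prediction task the PM was pre-trained on.
    prompt = f"{text} It was [MASK]."
    # Restrict candidate fillers to the verbalizer words and compare scores.
    predictions = fill_mask(prompt, targets=["great", "terrible"])
    best = max(predictions, key=lambda p: p["score"])
    # Verbalizer: map the higher-scoring label word back to a sentiment class.
    return "positive" if best["token_str"] == "great" else "negative"

print(classify_sentiment("The plot was gripping and the acting superb."))
```

Because the task is recast as the pre-training objective itself, no new classification head has to be trained, which is one reason prompt learning is attractive when labeled data are scarce.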
