Abstract

Short-text classification is an important task in natural language processing (NLP), yet its results are often unsatisfactory because Chinese short texts are sparse, annotated data are scarce, training is limited to a single task, and classes are imbalanced. We therefore propose a method based on the self-attention mechanism that models the relationships between attention heads, which improves the performance of self-attention-based models on short-text classification. In addition, we design a text augmentation template based on prompt learning with embedded labels, which transforms single-task classification into multitask classification and encourages the model to attend to the semantic consistency between labels and text. Experiments on the CHNSenticorp, COLD, and SST-2 datasets show that our approach outperforms several popular text classification methods.
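To illustrate the prompt-learning augmentation idea described above, the following is a minimal Python sketch of a template with embedded labels. The template wording, the label set, and the two-target (classification plus label-text consistency) setup are assumptions for illustration only, not the paper's exact design.

```python
# Hypothetical sketch of a prompt-style augmentation template with embedded labels.
# The template text and label names are assumptions, not taken from the paper.

from typing import List, Tuple

LABELS = ["positive", "negative"]  # assumed sentiment labels (CHNSenticorp/SST-2 style)
TEMPLATE = "The sentiment of this review is {label}. Review: {text}"  # assumed template

def augment_with_labels(text: str, gold_label: str) -> List[Tuple[str, int, int]]:
    """Expand one example into prompt-augmented variants, one per candidate label.

    Returns tuples of (augmented_text, class_target, consistency_target):
      - class_target: index of the gold label (original classification task)
      - consistency_target: 1 if the embedded label matches the gold label, else 0
        (auxiliary label-text consistency task, making training multitask)
    """
    gold_idx = LABELS.index(gold_label)
    augmented = []
    for label in LABELS:
        prompt = TEMPLATE.format(label=label, text=text)
        augmented.append((prompt, gold_idx, int(label == gold_label)))
    return augmented

if __name__ == "__main__":
    for sample in augment_with_labels("The service was quick and the food was great.", "positive"):
        print(sample)
```

Each original example is expanded into one augmented text per candidate label, so a single-task classifier can additionally be trained to judge whether the embedded label is consistent with the text.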
