Abstract

Short-text classification is an important task in natural language processing (NLP). Classification performance on Chinese short texts suffers from text sparsity, insufficient annotated data, the limitations of single-task learning, and class imbalance. We therefore propose a method built on the self-attention mechanism that models the relationships between attention heads, improving the performance of self-attention-based models on short-text classification tasks. In addition, we design a text augmentation template based on prompt learning with embedded labels, which transforms single-task classification into multitask classification and encourages the model to attend to the semantic consistency between labels and text. Experiments on the CHNSenticorp, COLD, and SST-2 datasets show that our method outperforms several popular text classification methods.
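To illustrate the idea of a label-embedded prompt template that turns classification into a label-text consistency check, a minimal sketch follows. It is only an illustration of the general technique, not the authors' actual template or code; the names (`LABELS`, `build_prompt`, `augment`) and the example wording are hypothetical.

```python
# Hypothetical sketch: embedding candidate labels into a prompt template so that
# each short text yields one "consistent" and several "inconsistent" examples,
# giving the model an auxiliary label-text consistency task alongside classification.

LABELS = ["positive", "negative"]  # e.g. sentiment labels as in CHNSenticorp

def build_prompt(text: str, label: str) -> str:
    """Wrap a short text in a template that embeds a candidate label."""
    return f"The sentiment of this review is {label}. Review: {text}"

def augment(text: str, gold_label: str):
    """Produce (prompt, is_consistent) pairs for every candidate label,
    so single-task classification becomes a multitask setup."""
    return [(build_prompt(text, label), int(label == gold_label)) for label in LABELS]

if __name__ == "__main__":
    for prompt, is_consistent in augment("The plot was dull and predictable.", "negative"):
        print(is_consistent, "|", prompt)
```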
