Abstract

Deep models are continually being developed to extract and represent features for text classification tasks. However, these models are sensitive to changes in the input data, which results in poor robustness, and they often lack information interaction and have weak representation ability. In this work, for feature extraction, a joint model consisting of a convolutional neural network, a bidirectional gated recurrent unit, and an attention mechanism is proposed. This new model improves versatility and fully exploits category information in text. For feature representation, a projector trained with supervised contrastive learning is introduced; it improves the encoder's representations and aggregates samples of the same category. To improve the robustness of the resulting model (PCRA), a gradient penalty is added to the contrastive loss function. Experiments are performed on four datasets to assess the proposed models (PCRA and PCRA-GP) using accuracy as the metric. The experimental results show that our model is suitable for variable-length and bilingual texts. It remains competitive with the baseline models and reaches state-of-the-art performance on the 20 Newsgroups dataset. Moreover, the performance of the model is evaluated under different hyperparameters to clarify its working mechanism.
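The sketch below is not the authors' implementation; it is a minimal PyTorch illustration, under our own assumptions, of the components the abstract names: a CNN + bidirectional GRU + attention encoder, a projection head trained with a supervised contrastive loss, and a gradient penalty on the embeddings (the PCRA-GP variant). All layer sizes, the kernel width, the temperature, and the penalty weight are illustrative placeholders, not values from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PCRAEncoder(nn.Module):
    """CNN -> BiGRU -> attention pooling -> projection head (illustrative sizes)."""
    def __init__(self, vocab_size, emb_dim=128, conv_channels=128, hidden=64, proj_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Convolution extracts local n-gram features along the token axis.
        self.conv = nn.Conv1d(emb_dim, conv_channels, kernel_size=3, padding=1)
        # Bidirectional GRU captures sequential context in both directions.
        self.bigru = nn.GRU(conv_channels, hidden, batch_first=True, bidirectional=True)
        # Simple additive attention pools the GRU states into one vector.
        self.att = nn.Linear(2 * hidden, 1)
        # Projection head used for the supervised contrastive objective.
        self.proj = nn.Sequential(
            nn.Linear(2 * hidden, 2 * hidden), nn.ReLU(), nn.Linear(2 * hidden, proj_dim)
        )

    def encode(self, emb):                            # emb: (B, T, E)
        x = F.relu(self.conv(emb.transpose(1, 2)))    # (B, C, T)
        h, _ = self.bigru(x.transpose(1, 2))          # (B, T, 2H)
        w = torch.softmax(self.att(h), dim=1)         # attention weights (B, T, 1)
        pooled = (w * h).sum(dim=1)                   # (B, 2H)
        return F.normalize(self.proj(pooled), dim=-1) # unit-norm projections

    def forward(self, tokens):
        return self.encode(self.embed(tokens))

def supcon_loss(z, labels, temperature=0.1):
    """Standard supervised contrastive loss over a batch of normalized projections."""
    sim = z @ z.t() / temperature
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()
    pos_mask.fill_diagonal_(0)                        # exclude self-pairs
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()
    exp = torch.exp(logits) * (1 - torch.eye(len(z), device=z.device))
    log_prob = logits - torch.log(exp.sum(dim=1, keepdim=True) + 1e-12)
    mean_pos = (pos_mask * log_prob).sum(dim=1) / pos_mask.sum(dim=1).clamp(min=1)
    return -mean_pos.mean()

def supcon_with_gradient_penalty(model, tokens, labels, gp_weight=0.1):
    """Contrastive loss plus a penalty on its gradient w.r.t. the embeddings (assumed form)."""
    emb = model.embed(tokens)
    loss = supcon_loss(model.encode(emb), labels)
    grads, = torch.autograd.grad(loss, emb, create_graph=True)
    return loss + gp_weight * grads.pow(2).sum(dim=(1, 2)).mean()

# Usage sketch with random data.
model = PCRAEncoder(vocab_size=30000)
tokens = torch.randint(0, 30000, (8, 20))
labels = torch.randint(0, 4, (8,))
loss = supcon_with_gradient_penalty(model, tokens, labels)
loss.backward()
```

Penalizing the gradient of the loss with respect to the embeddings is one common way to discourage sensitivity to small input perturbations, which is consistent with the robustness motivation stated above; the exact formulation used in the paper may differ.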
