Abstract

Existing research on Chinese text classification primarily focuses on classifying text at individual granularities, such as the character, word, sentence, and chapter levels. However, such approaches often fail to capture the semantic information embedded across these levels of granularity. To better extract a text's core content, this study proposes a text classification model that uses an attention mechanism to fuse multi-granularity information. The model first constructs embedding vectors for characters, words, and sentences. Character and word vectors are trained with the Word2Vec model, converting the input text into these representations. A bidirectional long short-term memory (BiLSTM) network is then applied to the character and word vectors to capture contextual semantic features, while sentence vectors are processed with the FastText model to extract the features they contain. To distill further important semantic information from the different feature vectors, they are fed into an attention mechanism layer, which enables the model to weight and emphasize the most significant information in the text. Experimental results demonstrate that the proposed model outperforms both single-granularity classification and combinations of two or more granularities, achieving improved classification accuracy on three publicly available Chinese datasets.
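The fusion step described above, where an attention layer weights the character-, word-, and sentence-level feature vectors before classification, can be sketched as follows. This is a minimal illustrative implementation of dot-product attention pooling, not the authors' actual code; the function names (`attention_fuse`, `softmax`) and the use of a single query vector are assumptions for the sketch.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of attention scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention_fuse(features, query):
    """Fuse granularity-level feature vectors with dot-product attention.

    features: list of equal-length vectors, e.g. the character-, word-,
              and sentence-level representations produced upstream.
    query:    a vector of the same length (in the real model this would
              be a learned parameter).
    Returns the attention-weighted sum of the features and the weights.
    """
    # Score each granularity's feature vector against the query.
    scores = [sum(f_i * q_i for f_i, q_i in zip(f, query)) for f in features]
    # Normalize scores into attention weights.
    weights = softmax(scores)
    # Weighted sum over granularities, dimension by dimension.
    dim = len(features[0])
    fused = [sum(w * f[d] for w, f in zip(weights, features)) for d in range(dim)]
    return fused, weights
```

In the full model, the fused vector would then be passed to a classification layer; here the query is fixed for illustration, whereas the paper's attention layer learns its parameters during training.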
