Abstract
Existing research on Chinese text classification primarily classifies text at a single granularity, such as the character, word, sentence, or document level. However, this approach often fails to capture the semantic information embedded across these different granularities. To better extract the core content of a text, this study proposes a text classification model that uses an attention mechanism to fuse multi-granularity information. The model first constructs embedding vectors for characters, words, and sentences. Character and word vectors are trained with the Word2Vec model, and a bidirectional long short-term memory (BiLSTM) network is applied to them to capture contextual semantic features; sentence vectors are produced with the FastText model to extract the features they contain. The resulting feature vectors are then fed into an attention mechanism layer, which allows the model to weight and emphasize the most significant information in the text. Experimental results show that the proposed model outperforms both single-granularity classification and combinations of two granularities, achieving improved classification accuracy on three publicly available Chinese datasets.
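The fusion step described above can be illustrated with a minimal sketch: given one feature vector per granularity (character, word, sentence), an attention layer scores each vector against a learned query, normalizes the scores with softmax, and returns a weighted sum as the fused representation. This is a hypothetical simplification for illustration only; the function names, the query vector `w`, and the random inputs are assumptions, not the paper's actual implementation, and the real model would learn `w` jointly with the BiLSTM and FastText encoders.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(features, w):
    """Fuse same-dimensional granularity features via attention.

    features: list of vectors, e.g. [char_vec, word_vec, sent_vec]
    w: attention query vector (hypothetical; learned during training)
    Returns (attention weights, fused representation).
    """
    F = np.stack(features)   # shape (num_granularities, d)
    scores = F @ w           # one relevance score per granularity
    alpha = softmax(scores)  # normalized attention weights
    return alpha, alpha @ F  # weighted sum over granularities

# Toy inputs standing in for BiLSTM/FastText outputs (assumed, not real data).
rng = np.random.default_rng(0)
d = 8
char_vec, word_vec, sent_vec = (rng.standard_normal(d) for _ in range(3))
w = rng.standard_normal(d)

alpha, fused = attention_fuse([char_vec, word_vec, sent_vec], w)
```

The attention weights sum to 1, so the fused vector stays in the same space as the inputs and can be passed directly to a downstream classifier.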