Text classification is a pivotal task in natural language processing (NLP), aimed at assigning semantic labels to text sequences. Traditional text representation methods rely heavily on manual feature extraction and often fail to capture the intricacies of contextual information. To overcome these limitations, this work presents the sequential attention fusion architecture (SAFA) to enhance feature extraction. SAFA combines a deep long short-term memory (LSTM) network with a multi-head attention mechanism (MHAM). The model efficiently preserves contextual information, even for longer phrases, while enhancing the understanding of local attributes. Additionally, we introduce a unique attention mechanism that optimizes information preservation, a crucial element in text classification. The paper also outlines a comprehensive framework, incorporating convolutional layers and pooling techniques, designed to improve feature representation and classification accuracy. The model's effectiveness is demonstrated through 2-dimensional convolution and advanced pooling operations, which significantly improve prediction accuracy. This research not only contributes to the development of more accurate text classification models but also underscores the growing importance of NLP techniques.
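The abstract itself contains no implementation details, so the following is only a minimal sketch of how the named components (deep LSTM, multi-head attention, 2-D convolution, pooling) might be composed for classification. It assumes a PyTorch implementation; the class name `SAFA`, all layer sizes, and the head, layer, and kernel counts are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

class SAFA(nn.Module):
    """Hypothetical sketch of a SAFA-style pipeline:
    stacked LSTM -> multi-head self-attention -> 2-D conv -> pooling -> classifier."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128,
                 num_layers=2, num_heads=4, num_classes=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Deep (stacked) LSTM: preserves context across longer phrases.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=num_layers,
                            batch_first=True)
        # Multi-head self-attention re-weights the LSTM states.
        self.mha = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        # 2-D convolution over the (sequence x feature) map for local attributes.
        self.conv = nn.Conv2d(1, 32, kernel_size=3, padding=1)
        self.pool = nn.AdaptiveMaxPool2d((1, 1))
        self.fc = nn.Linear(32, num_classes)

    def forward(self, token_ids):                 # token_ids: (batch, seq_len)
        x = self.embedding(token_ids)             # (batch, seq, embed_dim)
        h, _ = self.lstm(x)                       # (batch, seq, hidden_dim)
        attn_out, _ = self.mha(h, h, h)           # attention-fused sequence states
        feat = self.conv(attn_out.unsqueeze(1))   # (batch, 32, seq, hidden_dim)
        pooled = self.pool(feat).flatten(1)       # (batch, 32)
        return self.fc(pooled)                    # class logits

# Usage: logits = SAFA(vocab_size=10000)(torch.randint(0, 10000, (8, 50)))
```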