In recent years, the continuous growth of text data on social media has driven reliance on pre-training to develop new text classification models, especially transformer-based models, which have proven effective in most natural language processing tasks. This paper introduces a new Position-Context Additive transformer-based model (PCA model) consisting of two phases to increase the accuracy of text classification on social media. Phase I develops a new way to extract text features by attending to the position and context of each word at the input layer: an improved word embedding method (the position) is integrated with a Bi-LSTM network that strengthens the focus on how each word relates to the words around it (the context). Phase II develops a transformer-based model built primarily on an improved additive attention mechanism. The PCA model was evaluated on the classification of health-related social media texts across six datasets. Results show that it improved the F1-score by between 0.2% and 10.2% on five datasets compared to the best published results. The PCA model was also compared with three transformer-based models with proven accuracy in text classification; experiments showed that it outperformed them on four datasets, with F1-score improvements between 0.1% and 2.1%. The results further indicate a direct correlation between the volume of training data and classification performance, as larger training sets yield higher F1-scores.
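Since the abstract only sketches the two phases, the minimal PyTorch sketch below illustrates one plausible reading of them: learned positional embeddings added to word embeddings and fed through a Bi-LSTM (Phase I), followed by Bahdanau-style additive attention pooling (Phase II). All module names, dimensions, and the exact attention formulation here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class PositionContextEncoder(nn.Module):
    """Phase I (assumed reading): word embeddings plus learned positional
    embeddings (the 'position'), fed through a Bi-LSTM (the 'context')."""
    def __init__(self, vocab_size, max_len, d_model=128, hidden=64):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.bilstm = nn.LSTM(d_model, hidden, batch_first=True,
                              bidirectional=True)

    def forward(self, token_ids):                   # (batch, seq_len)
        positions = torch.arange(token_ids.size(1), device=token_ids.device)
        x = self.tok_emb(token_ids) + self.pos_emb(positions)  # position-aware input
        states, _ = self.bilstm(x)                  # (batch, seq_len, 2*hidden)
        return states

class AdditiveAttention(nn.Module):
    """Phase II (assumed reading): Bahdanau-style additive attention that
    pools the Bi-LSTM states into a single sentence vector."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.score = nn.Linear(dim, 1, bias=False)

    def forward(self, states):                      # (batch, seq_len, dim)
        scores = self.score(torch.tanh(self.proj(states)))  # (batch, seq_len, 1)
        weights = torch.softmax(scores, dim=1)      # attention over positions
        return (weights * states).sum(dim=1)        # (batch, dim)

# Hypothetical usage for a binary health-related/unrelated classifier.
encoder = PositionContextEncoder(vocab_size=30000, max_len=128)
attention = AdditiveAttention(dim=128)              # dim = 2 * hidden
classifier = nn.Linear(128, 2)
logits = classifier(attention(encoder(torch.randint(0, 30000, (4, 20)))))
```

The sketch adds positional embeddings before the Bi-LSTM so that both word order and bidirectional context shape the states the attention layer pools; the paper's actual additive-attention modification and hyperparameters may differ.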