Aspect-based sentiment analysis (ABSA) deals with understanding sentiments about specific attributes. Aspect term extraction (ATE) plays an essential role in aspect sentiment classification (ASC) by identifying the aspects on which accurate sentiment prediction depends. This paper introduces a dual-task framework for ATE and ASC in which the ATE model uses the imbalanced maximized-area under the curve proximate support vector machine (ImAUC-PSVM) method to mitigate the class-imbalance problem, and the output of ATE serves as the reward in the reinforcement learning (RL)-based training of the ASC model. ImAUC-PSVM combines the strengths of the conventional proximate support vector machine (PSVM) for sentiment tasks with area under the curve (AUC) optimization to address imbalance. The ASC model employs multiple kernels for word embedding, together with a transductive long short-term memory (TLSTM) network for sentence representation that exploits samples near the test point for better model refinement. The policy of the RL-based training strategy is represented by a multilayer perceptron (MLP) that categorizes sentences into positive, negative and neutral sentiments; in this RL setting, the extracted aspect terms serve as rewards during training. A scope loss function (SLF) is also integrated within RL to rectify dataset imbalance. To optimize both the ATE and ASC models, an artificial bee colony (ABC) technique is employed for hyperparameter optimization. Experiments and evaluations conducted on English (Restaurant and Laptop) and Hindi datasets show that the proposed model outperforms existing models in the literature. Moreover, transfer learning (TL) is applied to a Twitter dataset to assess its effect on ATE. The F-measures of the ATE model are 90.12 and 84.10, and the accuracies of the ASC model are 84.10 and 82.02, on the Restaurant and Laptop datasets, respectively.
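Of the components summarized above, the ABC hyperparameter search is the one that admits a compact illustration. The sketch below is a minimal, generic artificial bee colony loop (employed, onlooker and scout phases) that minimizes a black-box validation objective over box-constrained hyperparameters; the function names, population settings and the stand-in loss surface are illustrative assumptions, not the paper's implementation.

```python
import random

def abc_optimize(objective, bounds, n_bees=10, n_iter=50, limit=10, seed=0):
    """Minimal artificial bee colony search that minimizes `objective` over
    box constraints `bounds` = [(lo, hi), ...]. Illustrative sketch only;
    the paper's exact ABC variant and settings are not specified here."""
    rng = random.Random(seed)
    dim = len(bounds)

    def rand_point():
        return [rng.uniform(lo, hi) for lo, hi in bounds]

    def clip(x):
        return [min(max(v, lo), hi) for v, (lo, hi) in zip(x, bounds)]

    # Each food source: a candidate solution, its cost, a stagnation counter.
    foods = [rand_point() for _ in range(n_bees)]
    costs = [objective(f) for f in foods]
    trials = [0] * n_bees

    def neighbor(i):
        # Perturb one dimension relative to another random food source.
        k = rng.randrange(n_bees - 1)
        k = k if k < i else k + 1
        d = rng.randrange(dim)
        x = foods[i][:]
        x[d] += rng.uniform(-1, 1) * (foods[i][d] - foods[k][d])
        return clip(x)

    def greedy(i, x):
        # Keep the perturbed point only if it improves the source.
        c = objective(x)
        if c < costs[i]:
            foods[i], costs[i], trials[i] = x, c, 0
        else:
            trials[i] += 1

    for _ in range(n_iter):
        # Employed bee phase: local search around every food source.
        for i in range(n_bees):
            greedy(i, neighbor(i))
        # Onlooker phase: fitness-proportional focus on low-cost sources.
        fits = [1.0 / (1.0 + c) for c in costs]
        total = sum(fits)
        for _ in range(n_bees):
            r, acc, i = rng.uniform(0, total), 0.0, 0
            for j, f in enumerate(fits):
                acc += f
                if r <= acc:
                    i = j
                    break
            greedy(i, neighbor(i))
        # Scout phase: abandon stagnant sources and re-initialize them.
        for i in range(n_bees):
            if trials[i] > limit:
                foods[i] = rand_point()
                costs[i] = objective(foods[i])
                trials[i] = 0

    best = min(range(n_bees), key=lambda i: costs[i])
    return foods[best], costs[best]

# Example: tune two hypothetical hyperparameters against a stand-in
# validation-loss surface with a known minimum at (0.1, 0.3).
loss = lambda p: (p[0] - 0.1) ** 2 + (p[1] - 0.3) ** 2
params, cost = abc_optimize(loss, [(0.0, 1.0), (0.0, 1.0)])
```

In practice the `objective` would wrap one training-and-validation run of the ATE or ASC model, so each food source is a candidate hyperparameter vector scored by held-out performance.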