Abstract

The boom in wireless networking technology has led to an exponential increase in the number of web comments. Sentiment analysis of web comments is therefore vital, and aspect-based sentiment analysis (ABSA) is particularly useful for extracting their sentiment features. Currently, context-dependent sentiment features are typically derived from recurrent neural networks (RNNs), and the target vector is usually replaced by an averaged target vector. However, web comments have become increasingly complex, an RNN may lose essential sentiment information, and the averaged vector may misrepresent the target feature. We propose a new Transformer-based memory network (TF-MN) to correct these shortcomings. In TF-MN, the task is reframed as a question-answering process that optimizes the context, question, and memory modules. We use a global self-attention mechanism together with a local attention mechanism (a memory network) to build sentiment-oriented semantic representations of web comments. Since self-attention captures only global semantic links, words such as nouns, prepositions, and adverbs can still distort the sentiment extracted from a comment. To shield classification from the influence of unrelated vocabulary, we use an improved memory network to refine the extracted semantics. We conduct experiments on two datasets, and the results show that our model outperforms the state-of-the-art models.
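
To make the two attention mechanisms concrete, below is a minimal numpy sketch (ours, not the authors' released code) of a single global self-attention step and a local, memory-network-style attention that weights context words by their relevance to a target vector rather than averaging them; the learned projections, multi-head split, and multiple memory hops of the full TF-MN are omitted.

```python
import numpy as np

def self_attention(X):
    """Global scaled dot-product self-attention over token embeddings.
    X: (seq_len, d_model) array, one row per word. For brevity the
    learned query/key/value projections are omitted (Q = K = V = X)."""
    d_model = X.shape[-1]
    scores = X @ X.T / np.sqrt(d_model)             # pairwise association scores
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ X                              # context-aware word vectors

def target_attention(context, target_vec):
    """Local, memory-network-style attention: weight each context word
    by its relevance to the aspect target instead of averaging them."""
    scores = context @ target_vec
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ context
```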

Highlights

  • In the future, billions of devices will be connected to the Internet, and data will surround us [1], [2], so faster and more reliable data processing becomes critical

  • Motivated by this analysis, we propose a memory network model built on the Transformer

  • We propose a Transformer-based memory network model (TF-MN) whose sentiment question module treats each target in the text as the implicit question "What is the emotional tendency of the target in the text?" (as sketched below). Figure 1 is a flow chart of the TF-MN model
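
As a small illustration of that reframing (the helper name and exact question template below are ours, not taken from the paper), each extracted aspect target can be turned into its implicit question before the memory module answers it:

```python
def implicit_question(target):
    """Hypothetical helper: phrase an aspect target as the implicit
    question the sentiment question module reasons over."""
    return f"What is the emotional tendency of {target} in the text?"

# e.g. for the comment "The battery life is great but the screen is dim."
for target in ("battery life", "screen"):
    print(implicit_question(target))
```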

Summary

INTRODUCTION

There are billions of devices connected to the Internet, and data surround us [1], [2], so faster and more reliable data processing becomes critical. Reference [31] proposes an LSTM model based on segmentation attention (SA-LSTM-P), which effectively captures the structural dependence between a target and its emotional expression through a linear-chain conditional random field (CRF) layer; this model simulates the process by which humans infer emotional information while reading. The Transformer, by contrast, mainly applies a transformation and softmax to the association scores computed between each pair of words, and combines the results of each round of the self-attention mechanism to obtain a more comprehensive text representation. The position of each word is encoded by sine and cosine functions of different frequencies
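
The sinusoidal position encoding referred to above is standard Transformer machinery; the following is a minimal numpy sketch (assuming an even embedding dimension):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal position encoding of the original Transformer:
        PE(pos, 2i)   = sin(pos / 10000^(2i / d_model))
        PE(pos, 2i+1) = cos(pos / 10000^(2i / d_model))
    Assumes d_model is even for simplicity."""
    positions = np.arange(seq_len)[:, None]                        # (seq_len, 1)
    freqs = np.power(10000.0, np.arange(0, d_model, 2) / d_model)  # per-dimension frequencies
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(positions / freqs)  # even dimensions get sine
    pe[:, 1::2] = np.cos(positions / freqs)  # odd dimensions get cosine
    return pe
```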

THE PROPOSED MODEL
MEMORY MODULE
EVALUATION
Findings
CONCLUSION