Abstract
In the age of social networks, the enormous volume of tweets posted by users has led to a sharp rise in publicly expressed opinion. Public opinion is closely related to user stances, so user stance detection has become an important task in public opinion analysis. However, previous studies have not distinguished between user viewpoints and stances, and they have usually detected stance at the tweet level rather than the user level. Therefore, in this paper, we define user stance as the viewpoint a user holds (support, oppose, or neutral) toward a target event over its entire course. On this basis, we propose a user stance detection method based on external commonsense knowledge (e.g., SenticNet) and environment information (e.g., a user’s historical tweets, topic information, and neighbors’ tweets), which we denote ECKEI. First, to better integrate external commonsense knowledge into the neural network, we extend BiLSTM into CK-BiLSTM, which supplements the memory cell with complementary commonsense information. Second, we use LDA to extract topics from user tweets and design a topic-driven module to capture information from users’ neighbors. Finally, we use an attention mechanism to integrate information from the user’s historical tweets and the neighbor tweets retrieved through topic information, and a softmax layer to classify user stances into the support, neutral, and oppose classes. We conduct experiments on Brexit and election datasets to verify the practicability and effectiveness of the proposed method. Extensive experimental results on these datasets show that our approach outperforms six baseline methods (SVM-ngram, NB, MTTRE (RNN), Pkudblab (CNN), TAAT, and Aff-feature). We measure user stance detection performance using average micro-F1, average accuracy, and average recall. On the Brexit and election datasets, respectively, ECKEI improves average micro-F1 by 4.30–16.89% and 1.22–16.58%, average accuracy by 4.24–17.46% and 0.48–14.64%, and average recall by 5.34–17.30% and 2.65–19.73%.
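To make the pipeline described above concrete, the following is a minimal, hypothetical PyTorch sketch of its three ingredients: an LSTM-style encoder whose memory cell is supplemented with a commonsense vector (a stand-in for SenticNet concept features), attention over the user's historical tweets and topic-matched neighbor tweets, and a three-way softmax over stances. All module names, dimensions, and the gating form are illustrative assumptions, not the authors' exact CK-BiLSTM formulation.

```python
# Hypothetical sketch of the ECKEI pipeline; dimensions and gating are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CommonsenseLSTMCell(nn.Module):
    """LSTM cell whose memory cell is complemented by a commonsense vector k_t."""

    def __init__(self, input_dim, hidden_dim, knowledge_dim):
        super().__init__()
        self.cell = nn.LSTMCell(input_dim, hidden_dim)
        # Gate deciding how much commonsense information enters the memory cell
        # (an assumed mechanism for "complementary commonsense information").
        self.k_gate = nn.Linear(hidden_dim + knowledge_dim, hidden_dim)
        self.k_proj = nn.Linear(knowledge_dim, hidden_dim)

    def forward(self, x_t, k_t, state):
        h, c = self.cell(x_t, state)
        g = torch.sigmoid(self.k_gate(torch.cat([h, k_t], dim=-1)))
        c = c + g * torch.tanh(self.k_proj(k_t))   # inject knowledge into the cell
        return h, c


class ECKEISketch(nn.Module):
    """Toy user-stance classifier: encode tweets, attend over history + neighbors."""

    def __init__(self, input_dim=100, hidden_dim=64, knowledge_dim=50, n_classes=3):
        super().__init__()
        self.hidden_dim = hidden_dim
        self.fwd = CommonsenseLSTMCell(input_dim, hidden_dim, knowledge_dim)
        self.bwd = CommonsenseLSTMCell(input_dim, hidden_dim, knowledge_dim)
        self.attn = nn.Linear(2 * hidden_dim, 1)
        self.out = nn.Linear(2 * hidden_dim, n_classes)

    def encode(self, tweets, knowledge):
        # tweets: (T, input_dim) tweet embeddings; knowledge: (T, knowledge_dim)
        T, device = tweets.size(0), tweets.device
        hf = cf = torch.zeros(1, self.hidden_dim, device=device)
        hb = cb = torch.zeros_like(hf)
        fwd_states, bwd_states = [], []
        for t in range(T):                       # forward pass of the BiLSTM
            hf, cf = self.fwd(tweets[t:t + 1], knowledge[t:t + 1], (hf, cf))
            fwd_states.append(hf)
        for t in reversed(range(T)):             # backward pass of the BiLSTM
            hb, cb = self.bwd(tweets[t:t + 1], knowledge[t:t + 1], (hb, cb))
            bwd_states.append(hb)
        bwd_states.reverse()
        return torch.cat([torch.cat(fwd_states), torch.cat(bwd_states)], dim=-1)

    def forward(self, history, history_k, neighbors, neighbors_k):
        # Attention fuses the user's historical tweets with topic-matched neighbor tweets.
        reps = torch.cat([self.encode(history, history_k),
                          self.encode(neighbors, neighbors_k)], dim=0)
        weights = F.softmax(self.attn(reps), dim=0)        # (T_total, 1)
        user_vec = (weights * reps).sum(dim=0)             # attention-pooled user vector
        return F.log_softmax(self.out(user_vec), dim=-1)   # support / neutral / oppose


if __name__ == "__main__":
    model = ECKEISketch()
    hist, hist_k = torch.randn(5, 100), torch.randn(5, 50)
    neigh, neigh_k = torch.randn(3, 100), torch.randn(3, 50)
    print(model(hist, hist_k, neigh, neigh_k))   # log-probabilities over 3 stances
```

In this sketch, the topic-driven neighbor selection (LDA) is assumed to have already produced the `neighbors` embeddings; only the encoding, fusion, and classification stages are shown.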