Abstract
Many existing fine-grained sentiment analysis (FGSA) methods suffer from problems such as loss of fine-grained information, difficulty handling polysemy, and imbalanced sample categories. To address these issues, a Transformer-based FGSA method for Weibo comment text is proposed. First, a knowledge-enhanced RoBERTa model is used to dynamically encode the text, resolving the polysemy issue. Then, a BiLSTM effectively captures bidirectional global semantic dependency features. Next, a Transformer fuses multi-dimensional features and adaptively emphasizes key features, overcoming the loss of fine-grained information. Finally, the model is trained with an improved Focal Loss function to mitigate the imbalance of sample categories. Experimental results on the SMP2020-EWECT, NLPCC 2013 Task 2, NLPCC 2014 Task 1, and weibo_senti_100k datasets demonstrate that the proposed method outperforms advanced comparison methods.
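To make the described pipeline concrete, the following is a minimal sketch of the RoBERTa → BiLSTM → Transformer-fusion → classifier architecture with a focal loss, assuming PyTorch and Hugging Face transformers. The pretrained model name, layer sizes, pooling strategy, and the textbook focal-loss form are illustrative assumptions; the abstract does not specify the authors' exact configuration or their Focal Loss improvement.

```python
# Hypothetical sketch of the pipeline described in the abstract (not the
# authors' implementation). Model name, dimensions, and loss form are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel

class FGSAModel(nn.Module):
    def __init__(self, pretrained="hfl/chinese-roberta-wwm-ext",  # assumed Chinese RoBERTa checkpoint
                 lstm_hidden=256, num_classes=6):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(pretrained)      # dynamic contextual encoding
        hidden = self.encoder.config.hidden_size                   # e.g. 768
        self.bilstm = nn.LSTM(hidden, lstm_hidden, batch_first=True,
                              bidirectional=True)                  # bidirectional global dependencies
        fusion_layer = nn.TransformerEncoderLayer(
            d_model=2 * lstm_hidden, nhead=8, batch_first=True)
        self.fusion = nn.TransformerEncoder(fusion_layer, num_layers=1)  # feature fusion / reweighting
        self.classifier = nn.Linear(2 * lstm_hidden, num_classes)

    def forward(self, input_ids, attention_mask):
        token_states = self.encoder(input_ids=input_ids,
                                    attention_mask=attention_mask).last_hidden_state
        lstm_out, _ = self.bilstm(token_states)
        fused = self.fusion(lstm_out,
                            src_key_padding_mask=~attention_mask.bool())
        pooled = fused.mean(dim=1)                                  # simple mean pooling (padding kept for brevity)
        return self.classifier(pooled)

def focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Standard multi-class focal loss; the paper's 'improved' variant is not
    detailed in the abstract, so the textbook form is shown here."""
    log_probs = F.log_softmax(logits, dim=-1)
    pt = log_probs.exp().gather(1, targets.unsqueeze(1)).squeeze(1)   # probability of the true class
    log_pt = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    loss = -((1.0 - pt) ** gamma) * log_pt                            # down-weight easy examples
    if alpha is not None:                                             # optional per-class weights
        loss = alpha[targets] * loss
    return loss.mean()
```

The mean pooling and single fusion layer are placeholders; masked pooling or a [CLS]-style readout and deeper fusion would be natural substitutions.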