Abstract

Research on machine-assisted text analysis has followed the rapid development of digital media, and sentiment analysis is among its most prevalent applications. Traditional sentiment analysis methods require complex feature engineering, and embedding representations have dominated leaderboards for a long time. However, their context-independent nature limits their representative power in rich contexts, hurting performance on Natural Language Processing (NLP) tasks. Bidirectional Encoder Representations from Transformers (BERT), among other pre-trained language models, beat the previous best results on eleven NLP tasks (including sentence-level sentiment classification) by a large margin, making it the new baseline for text representation. Aspect-level sentiment classification is a more challenging task, and fewer applications of BERT to it have been reported. We implement three target-dependent variants of the BERT-base model, which position the output at the target terms and optionally take an auxiliary sentence with the target built in. Experiments on three data collections show that our TD-BERT model achieves new state-of-the-art performance compared with traditional feature-engineering methods, embedding-based models and earlier applications of BERT. Given BERT's success in many NLP tasks, our experiments verify whether its context-aware representation can achieve a similar performance improvement in aspect-based sentiment analysis. Surprisingly, coupling it with the complex neural networks that used to work well with embedding representations adds little value, sometimes yielding performance below the vanilla BERT-FC implementation. On the other hand, incorporating target information brings a stable accuracy improvement, and our experiments show the most effective ways of utilizing that information.
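
The following is a minimal sketch of the core idea described above, pooling the BERT output at the target-term positions rather than at the [CLS] vector; the class name, max-pooling choice and hyperparameters are illustrative assumptions, not the paper's exact TD-BERT heads (PyTorch with the Hugging Face transformers library):

  import torch
  import torch.nn as nn
  from transformers import BertModel

  class TDBertClassifier(nn.Module):
      # Hypothetical TD-BERT-style head: classify sentiment from the
      # encoder states at the target tokens instead of from [CLS].
      def __init__(self, num_classes=3, model_name="bert-base-uncased"):
          super().__init__()
          self.bert = BertModel.from_pretrained(model_name)
          self.fc = nn.Linear(self.bert.config.hidden_size, num_classes)

      def forward(self, input_ids, attention_mask, target_mask):
          # target_mask: 1 for tokens of the target term, 0 elsewhere
          hidden = self.bert(input_ids=input_ids,
                             attention_mask=attention_mask).last_hidden_state
          mask = target_mask.unsqueeze(-1).to(hidden.dtype)
          neg_inf = torch.finfo(hidden.dtype).min
          # Max-pool over the target positions only, so the vector that
          # feeds the classifier is target-aware.
          pooled = (hidden * mask + (1 - mask) * neg_inf).max(dim=1).values
          return self.fc(pooled)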

Highlights

  • The size of digital media is growing at an explosive pace, which makes information consumption a challenging task

  • We introduce our approach to target-dependent sentiment classification, which is based on Bidirectional Encoder Representations from Transformers (BERT)

  • BERT has demonstrated a clear advantage in text representation across many Natural Language Processing (NLP) tasks, including sentence-level sentiment classification

Summary

INTRODUCTION

The size of digital media is growing at an explosive pace, which makes information consumption a challenging task. Machine-assisted media processing is valuable to many recipients, including governments, companies and individuals, with applications such as stock price prediction, product recommendation and opinion polling. All of these require accurate extraction of the main entities, together with the opinions or attitudes expressed by the author. Traditional target-dependent sentiment classification focuses on feature engineering to get the most out of a classifier (e.g., Support Vector Machines) [15], [18], [42]. Such methods need laborious feature-engineering work and/or massive linguistic resources, which are time-consuming, error-prone and demand extensive domain knowledge from experts. Incorporating target information is a key factor in BERT's performance improvement, and we show several simple but effective strategies for doing so, one of which is sketched below.
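
One such strategy (a sketch under our assumptions, not necessarily the paper's exact preprocessing) is to supply the target as the second segment of a BERT sentence pair, so the encoder sees "[CLS] sentence [SEP] target [SEP]" and can condition on the aspect being classified; the example sentence and target below are illustrative:

  from transformers import BertTokenizer

  tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
  sentence = "The battery life is great but the screen is dim."
  target = "screen"  # the aspect whose sentiment we want to classify

  # Encoding the (sentence, target) pair yields
  # "[CLS] sentence [SEP] target [SEP]"; token_type_ids mark the review
  # as segment 0 and the target as segment 1, making the target explicit.
  encoded = tokenizer(sentence, target, truncation=True,
                      max_length=64, return_tensors="pt")
  print(encoded["input_ids"].shape, encoded["token_type_ids"])

This pairing reuses BERT's pre-trained next-sentence input format, so the target signal is injected without any change to the model architecture.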

RELATED WORK
PROBLEM DEFINITION AND NOTATIONS
TARGET-DEPENDENT BERT
TWO VARIANTS
CONCLUSION
