Abstract

Text Sentiment Classification (TSC) is an important task in Natural Language Processing (NLP). Earlier TSC methods rely on static word representations and therefore cannot adapt a word's meaning to its context, which introduces errors into classification results. The proposal and application of Bidirectional Encoder Representations from Transformers (BERT) has greatly improved the accuracy of TSC: BERT perceives contextual information dynamically and yields more accurate classifications. In this paper, we build different BERT-based models and conduct experiments on the benchmark Internet Movie Database (IMDB) dataset. The experimental results show that even with a simple linear classification layer, our BERT-based model improves the F1-score by 2.01% over the best-performing baseline.
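To make the setup concrete, the following is a minimal sketch of a BERT-based classifier of the kind the abstract describes: a pretrained BERT encoder followed by a single linear layer. The checkpoint name (`bert-base-uncased`), the use of the pooled `[CLS]` output, and all hyperparameters are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: pretrained BERT encoder + one linear classification layer.
# Checkpoint, pooling strategy, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class BertLinearClassifier(nn.Module):
    def __init__(self, num_labels: int = 2):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # A single linear layer maps the sentence representation to labels.
        self.classifier = nn.Linear(self.bert.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        outputs = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        # Use the pooled [CLS] output as the sentence-level representation.
        cls_repr = outputs.pooler_output
        return self.classifier(cls_repr)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertLinearClassifier(num_labels=2)  # positive / negative sentiment

batch = tokenizer(["This movie was wonderful."],
                  padding=True, truncation=True, max_length=128,
                  return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.shape)  # torch.Size([1, 2])
```

Because the contextualized `[CLS]` representation already encodes sentence-level semantics, even this single linear head can serve as a competitive classifier, which is consistent with the abstract's claim about a simple linear layer.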
