Abstract

Sentiment analysis gained popularity once Natural Language Processing algorithms proved able to process complex sentences with good accuracy. Recently, pre-trained language models such as BERT and mBERT have been shown to be effective at improving performance on language tasks. Most work applying these models focuses on fine-tuning BERT to achieve the desired results. However, this approach is resource-intensive and requires long training times, up to a few hours on a GPU depending on the dataset. Hence, this paper proposes a less complex system with shorter training time that uses the BERT model without fine-tuning and adopts a feature-reduction algorithm to reduce the sentence embeddings. The experimental results show that with 50% fewer sentence-embedding features, the proposed system improves accuracy by 1-2% with 71% less training time and 89% less memory usage. The proposed approach is also shown to work for multilingual tasks using a single mBERT model.
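The abstract does not name the feature-reduction algorithm, so the following is only a minimal sketch of the general idea, assuming a PCA-style projection: sentence embeddings from a frozen (not fine-tuned) BERT model are projected onto their top principal components, halving the feature count before a downstream classifier is trained. The toy 64-dimensional embeddings stand in for real 768-dimensional BERT outputs; `reduce_embeddings` and `keep_ratio` are hypothetical names for illustration.

```python
import numpy as np

def reduce_embeddings(embeddings, keep_ratio=0.5):
    """PCA-style projection of sentence embeddings onto the top
    principal components (a stand-in for the paper's unspecified
    feature-reduction algorithm)."""
    # Center the embeddings so the SVD yields principal directions.
    centered = embeddings - embeddings.mean(axis=0)
    # Rows of vt are the principal directions, ordered by variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    # keep_ratio=0.5 corresponds to the "50% fewer features" setting.
    k = max(1, int(embeddings.shape[1] * keep_ratio))
    return centered @ vt[:k].T

# Toy stand-in: 100 sentences with 64-dim embeddings
# (real BERT sentence embeddings would be 768-dimensional).
rng = np.random.default_rng(0)
emb = rng.normal(size=(100, 64))
reduced = reduce_embeddings(emb, keep_ratio=0.5)
print(reduced.shape)  # (100, 32)
```

Because the BERT encoder is frozen, the embeddings are computed once and only the small projection plus classifier are trained, which is where the reported savings in training time and memory would come from.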
