E-commerce reviews are increasingly valued by both customers and companies. The high demand for sentiment analysis is driven by businesses that rely on it as a crucial tool to improve product quality and make informed decisions in a fiercely competitive environment. This review paper explores and evaluates applications of the BERT model, a Natural Language Processing (NLP) technique, in sentiment analysis across various fields. The model has been applied in studies spanning multiple languages and domains, including restaurant businesses, agriculture, Automated Essay Scoring (AES), Twitter, and Google Play. Fine-tuning BERT involves adapting the pre-trained model to specific language understanding tasks. Text pre-processing is performed to clean the data and convert it to numbers before it is fed into BERT, which generates a vector for each input token. The reviewed studies found that BERT outperformed prior baselines on a range of general language understanding tasks, including sentiment analysis, paraphrase recognition, question answering, and linguistic acceptability. Two problems affect the model's accuracy: the difficulty of detecting neutral reviews and the presence of false reviews in the dataset. Training is also slow, because the model is large and has many weights to update. Further research could improve BERT's accuracy by constructing a false-review classification model and by additional training to better recognize neutral reviews. Doi: 10.28991/HIJ-2023-04-02-015
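The pre-processing step described above (cleaning the text and converting it to numbers before feeding it into BERT) can be sketched as follows. This is a minimal illustrative example, not the pipeline used in the reviewed studies: the tiny `VOCAB` dictionary, the `clean` and `encode` helpers, and the fixed `max_len` are all hypothetical stand-ins for BERT's real WordPiece vocabulary and tokenizer.

```python
import re

# Hypothetical miniature vocabulary standing in for BERT's WordPiece vocab.
# Real BERT uses ~30,000 entries; the special tokens mirror BERT's conventions.
VOCAB = {"[PAD]": 0, "[UNK]": 1, "[CLS]": 2, "[SEP]": 3,
         "the": 4, "product": 5, "is": 6, "great": 7, "bad": 8}

def clean(text: str) -> list[str]:
    """Lowercase the review, strip non-letter characters, split into tokens."""
    return re.sub(r"[^a-z\s]", " ", text.lower()).split()

def encode(text: str, max_len: int = 8) -> list[int]:
    """Wrap tokens in [CLS]/[SEP], map them to IDs, and pad to a fixed length."""
    tokens = ["[CLS]"] + clean(text)[: max_len - 2] + ["[SEP]"]
    ids = [VOCAB.get(t, VOCAB["[UNK]"]) for t in tokens]
    return ids + [VOCAB["[PAD]"]] * (max_len - len(ids))

print(encode("The product is GREAT!"))  # → [2, 4, 5, 6, 7, 3, 0, 0]
```

The resulting ID sequence is what a BERT model would consume; the model then produces one contextual vector per input token, with the `[CLS]` position commonly used for sentiment classification.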