Abstract

In e-commerce, spotting fake reviews is vital for maintaining consumer trust. Identifying them is challenging, however, because fake reviews are often crafted to seem genuine, and their sheer volume makes thorough checking difficult. Prior methods relied on basic strategies such as grammar checks or pattern analysis, but they fall short as fake-review generation methods become increasingly sophisticated. Even machine learning approaches, while helpful, struggle to pinpoint subtle fake reviews accurately. This has led to a shift toward deep learning algorithms, which show promise in handling complexities that traditional methods cannot manage. Specifically, transformer models such as BERT, RoBERTa, and XLNet have emerged as potential solutions. This study evaluates the effectiveness of these models in distinguishing human-generated from computer-generated reviews across different scenarios on online platforms. RoBERTa achieves the highest accuracy among the models, reaching 97.1%, though it requires a longer training period; BERT and XLNet offer decent accuracy with varying error rates. RoBERTa also demonstrates a low Type I error rate of 2.2%, although its Type II error remains at a moderate level.
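To make the reported metrics concrete, the following is a minimal sketch of how accuracy, Type I error (genuine reviews wrongly flagged as fake), and Type II error (fake reviews that slip through) are derived from a binary classifier's confusion matrix. The counts below are hypothetical, chosen only to roughly match the abstract's reported rates; they are not the study's actual data.

```python
# Hypothetical confusion matrix for a binary fake-review classifier,
# treating "computer-generated (fake)" as the positive class.
# Illustrative counts only, not the paper's data.
tp, fn = 482, 18   # fake reviews correctly / incorrectly classified
tn, fp = 489, 11   # genuine reviews correctly / incorrectly classified

total = tp + fn + tn + fp
accuracy = (tp + tn) / total
type_i_error = fp / (fp + tn)    # false positive rate: genuine flagged as fake
type_ii_error = fn / (fn + tp)   # false negative rate: fake passed as genuine

print(f"accuracy: {accuracy:.3f}")                # 0.971
print(f"Type I error rate: {type_i_error:.3f}")   # 0.022
print(f"Type II error rate: {type_ii_error:.3f}") # 0.036
```

Reporting both error types separately matters here because the two mistakes have different costs: a high Type II error lets fake reviews through, while a high Type I error penalizes legitimate reviewers.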

