Abstract

Deceptive reviews on online platforms are a bitter reality today. These reviews are written deliberately to promote or disparage specific services, brands, and products. A large body of work already exists on detecting such reviews, relying on either review-text features or review-metadata features. The former provides rich information, as many features can be extracted from the text, such as duplicated writing patterns, sentiment, and common words. However, with the growth of tools such as article rewriters and article spinners, deceptive reviewers can easily evade these features. Moreover, no labeled datasets are available for this classification problem. This research article addresses these issues by first creating a labeled dataset, then developing a synonyms-based n-grams approach to extract features from review text, and finally testing the effectiveness of these features against state-of-the-art feature-extraction techniques for classifiers such as SVM and Naive Bayes. The evaluation results show that the state-of-the-art techniques fail significantly at detecting deceptive reviews rewritten by software, whereas the proposed method detects such reviews better while also reducing the length of the feature vectors.
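The core idea behind synonyms-based n-grams is that spun or rewritten reviews swap words for synonyms, so mapping each word to a canonical representative before extracting n-grams makes the rewritten variants collapse to the same features. A minimal sketch of that idea follows; the synonym table, function names, and example reviews are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical synonym table: every word in a synonym set maps to one
# canonical representative, so spun rewrites collapse to the same tokens.
SYNONYMS = {
    "excellent": "good", "great": "good", "fine": "good",
    "terrible": "bad", "awful": "bad", "poor": "bad",
    "purchase": "buy", "acquire": "buy",
}

def normalize(tokens):
    """Replace each token with its canonical synonym, if one is known."""
    return [SYNONYMS.get(t, t) for t in tokens]

def ngrams(tokens, n=2):
    """Extract contiguous n-grams from a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def synonym_ngrams(text, n=2):
    """Lowercase, tokenize, synonym-normalize, then extract n-grams."""
    tokens = normalize(text.lower().split())
    return ngrams(tokens, n)

# Two "spun" variants of the same review yield identical bigram features,
# so a classifier (e.g. SVM or Naive Bayes) sees one pattern, not two.
a = synonym_ngrams("Excellent product great service")
b = synonym_ngrams("Fine product excellent service")
```

Because synonym sets are merged into single canonical tokens, the resulting feature vocabulary is smaller than a plain n-gram vocabulary, which is consistent with the abstract's claim of shorter feature vectors.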
