Abstract

In this paper, we examine the utility of script analysis for predicting the helpfulness of online customer reviews. We adopt the lens of cognitive scripts and posit that people share a cognitive script for what constitutes a helpful review in a given domain. Conceptually, a script comprises the salient elements that readers look for when judging whether a review is helpful. To operationalize the construct of a cognitive script, we ask human annotators to highlight phrases that they believe are important for determining review helpfulness. The words in the annotated phrases are collected into a script lexicon for the given domain. The lexicon entries capture the shared conception of essential elements that are key to evaluating review helpfulness. We use the words in the script lexicon as features in a text regression model to predict review helpfulness. Furthermore, we develop and empirically validate a new approach for combining script analysis with dimension reduction. The purpose of the study is to propose a new method for predicting review helpfulness and to evaluate the effectiveness and efficiency of the scripts-enriched model. To demonstrate its efficacy, we compare the scripts-enriched model with two benchmarks: a baseline model and a bag-of-words (BOW) model. The results show that the scripts-enriched text regression model not only achieves the highest accuracy but also requires the lowest training, testing, and feature selection times.
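The abstract's pipeline, building a script lexicon from annotator-highlighted phrases and representing each review by lexicon-word features, can be sketched as follows. This is a minimal illustration under assumed inputs (the phrase list, tokenization by whitespace, and count-based features are hypothetical choices, not the paper's exact procedure):

```python
# Hypothetical sketch of the scripts-enriched feature step:
# words from annotator-highlighted phrases form the domain's
# script lexicon; each review becomes a vector of lexicon-word counts,
# which would then feed a text regression model.

from collections import Counter

def build_script_lexicon(annotated_phrases):
    """Collect the distinct words appearing in annotator-highlighted phrases."""
    lexicon = set()
    for phrase in annotated_phrases:
        lexicon.update(phrase.lower().split())
    return sorted(lexicon)

def script_features(review, lexicon):
    """Represent a review by counts of script-lexicon words only,
    ignoring all other vocabulary (unlike a full bag-of-words model)."""
    counts = Counter(review.lower().split())
    return [counts[w] for w in lexicon]

# Toy illustration with invented annotations and an invented review:
phrases = ["battery life", "screen quality"]
lexicon = build_script_lexicon(phrases)   # ['battery', 'life', 'quality', 'screen']
review = "The battery life is great and the screen is sharp"
vector = script_features(review, lexicon)  # [1, 1, 0, 1]
```

Because the feature space is restricted to the script lexicon rather than the full vocabulary, such a representation is far smaller than a bag-of-words matrix, which is consistent with the reported reductions in training, testing, and feature selection times.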
