Abstract

Organizational research increasingly relies on online review data to gauge the perceived valuation and reputation of organizations and products. Online review platforms typically collect ordinal ratings (e.g., 1 to 5 stars); however, researchers often treat them as cardinal data, calculating aggregate statistics such as the average, the median, or the variance of ratings. These statistics implicitly assume that adjacent rating levels are equidistant. We test whether star ratings are equidistant using reviews from two large-scale online review platforms: Amazon.com and Yelp.com. We develop a deep learning framework that analyzes review text to assess each review's overall valuation. We find that 4- and 5-star ratings, as well as 1- and 2-star ratings, are closer to each other than 3-star ratings are to either 2- or 4-star ratings. An additional online experiment corroborates this pattern. Using simulations, we show that the distortion caused by non-equidistant ratings is especially harmful when organizations receive only a few reviews and when researchers are interested in estimating variance effects. We discuss potential solutions to the problem of rating non-equidistance.
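To make the distortion concrete, below is a minimal simulation sketch (not from the paper). It assumes a hypothetical non-equidistant latent scale in which 2-star reviews sit close to 1-star reviews and 4-star reviews sit close to 5-star reviews, then compares the mean and variance computed from raw star values (equidistance assumed) against the same statistics computed on that latent scale:

```python
import random
import statistics

# Hypothetical latent valuations per star level, illustrating the finding
# that 1- and 2-star ratings (and 4- and 5-star ratings) sit closer together
# than an equidistant 1..5 scale assumes. These values are assumptions for
# illustration only, not estimates from the paper.
LATENT = {1: 1.0, 2: 1.4, 3: 3.0, 4: 4.6, 5: 5.0}

def distortion(n_reviews, weights, trials=10_000, seed=42):
    """Average absolute gap between statistics computed on raw star values
    (equidistance assumed) and on the hypothetical latent scale."""
    rng = random.Random(seed)
    stars = [1, 2, 3, 4, 5]
    mean_gap = var_gap = 0.0
    for _ in range(trials):
        sample = rng.choices(stars, weights=weights, k=n_reviews)
        latent = [LATENT[s] for s in sample]
        mean_gap += abs(statistics.fmean(sample) - statistics.fmean(latent))
        var_gap += abs(statistics.variance(sample) - statistics.variance(latent))
    return mean_gap / trials, var_gap / trials

# J-shaped rating weights, mimicking distributions commonly observed on
# online review platforms (mostly 5-star, a secondary bump at 1-star).
weights = [0.10, 0.05, 0.10, 0.25, 0.50]
for n in (5, 50, 500):
    mg, vg = distortion(n, weights)
    print(f"n={n:>3}  mean distortion={mg:.3f}  variance distortion={vg:.3f}")
```

With these illustrative numbers, the gap fluctuates most for small samples, echoing the abstract's point about organizations with few reviews; a real analysis would of course require empirically calibrated latent values rather than the assumed ones above.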
