Abstract

Consumers benefit from reading online ratings before making purchases, yet this information-aggregation process has potential problems that were not previously recognized in the literature. Using an empirical approach, I show how people can review businesses inconsistently when their expectations are shaped by ratings on crowd-sourced review websites. With data from Yelp, I test how potential disappointment may affect customers' reviews, applying a regression discontinuity design to control for unobserved factors that may simultaneously influence ratings. In addition, I develop a model of rating behavior with reference-dependent utility to derive testable hypotheses, and I show that the comparison between true experience and expectation at the moment consumers write their reviews can impede their assessment of business quality and cause attribution bias. After carefully excluding confounding variables, my results support the hypothesis that consumers exhibit attribution bias when they write reviews. Several robustness checks support these findings and shed further light on this example of attribution bias. This paper connects to an emerging literature on attribution bias in economics and provides empirical evidence on, and implications of, attribution bias for online reputation systems.
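The identification strategy described above can be illustrated with a minimal simulated sketch. Everything below is an assumption for illustration only, not the paper's actual specification: the 3.75-star rounding cutoff, the bandwidth, the bias magnitude, and the simulated data are all hypothetical. The idea is that a review site displays ratings rounded to the nearest half star, so businesses just above versus just below a rounding cutoff receive different displayed ratings despite nearly identical underlying quality, creating a discontinuity one can exploit.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each business has a "true" running average rating.
# The site displays it rounded to the nearest half star, so businesses
# just above vs. just below a rounding cutoff (here, 3.75) show
# different displayed ratings despite nearly identical quality.
n = 10_000
true_avg = rng.uniform(3.5, 4.0, n)        # running variable
displayed = np.round(true_avg * 2) / 2     # rounds at 3.75 to 3.5 or 4.0

# Outcome: the next review's star rating. Assumed attribution-bias
# mechanism: consumers whose expectation (displayed rating) exceeds
# their experience (true quality) rate more harshly, and vice versa.
bias = -0.3 * (displayed - true_avg)       # expectation-experience gap
next_review = true_avg + bias + rng.normal(0, 0.5, n)

# Sharp RD comparison at the rounding cutoff: mean outcomes within a
# narrow bandwidth on either side of the threshold.
cutoff, h = 3.75, 0.05
below = next_review[(true_avg >= cutoff - h) & (true_avg < cutoff)]
above = next_review[(true_avg >= cutoff) & (true_avg < cutoff + h)]
effect = above.mean() - below.mean()
print(f"estimated jump at cutoff: {effect:.3f}")
```

Under these assumed parameters, businesses just above the cutoff (displayed 4.0 stars) attract systematically lower subsequent reviews than near-identical businesses just below it (displayed 3.5 stars), so the estimated jump is negative, consistent with the disappointment channel the abstract describes.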
