Abstract

Consumers, businesses and organisations rely on others’ ratings of items when making choices. However, individual reviewers vary in their accuracy, and some are biased—either systematically over- or under-rating items relative to others’ tastes, or even deliberately distorting a rating. We describe how to process the ratings that a group of reviewers assigns to a set of items and to evaluate the individual reviewers’ accuracies and biases, in a way that yields unbiased and consistent estimates of the items’ true qualities. We provide Monte Carlo simulations that showcase the added value of our technique even with small data sets, and we show that this improvement grows as the number of items increases. Revisiting the famous 1976 wine tasting that compared Californian and Bordeaux wines, accounting for the substantial variation in reviewers’ biases and accuracies yields a ranking that differs from the original ranking by average rating. We also illustrate the power of this methodology with an application to more than 45,000 ratings of ‘en primeur’ Bordeaux fine wines by expert critics. Those data show that our estimated wine qualities significantly predict prices when controlling for prominent experts’ ratings and numerous fixed effects. We also find that the elasticity of a wine’s price with respect to an expert’s rating increases with that expert’s accuracy.
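The flavour of the approach can be sketched with a small Monte Carlo simulation. The model below is a toy assumption (not the paper’s exact estimator): each reviewer j rates item i as r_ij = q_i + b_j + e_ij, where b_j is the reviewer’s bias and e_ij is noise whose standard deviation reflects the reviewer’s (in)accuracy. A naive average across reviewers is compared with an estimate that first removes each reviewer’s estimated bias and then weights reviewers by their estimated precision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rating model (an illustrative assumption): r_ij = q_i + b_j + e_ij.
n_items, n_reviewers = 50, 8
q = rng.normal(0.0, 1.0, n_items)           # true item qualities
b = rng.normal(0.0, 0.8, n_reviewers)       # reviewer biases
s = rng.uniform(0.2, 1.5, n_reviewers)      # reviewer noise sd (inverse accuracy)
R = (q[:, None] + b[None, :]
     + rng.normal(0.0, 1.0, (n_items, n_reviewers)) * s[None, :])

# Naive estimate: plain average of each item's ratings.
naive = R.mean(axis=1)

# Step 1: estimate each reviewer's bias as their mean deviation
# from the item averages (identified only relative to the mean bias).
b_hat = (R - naive[:, None]).mean(axis=0)
R_centered = R - b_hat[None, :]

# Step 2: estimate each reviewer's noise variance from the residuals
# and weight the bias-corrected ratings by their precision.
resid = R - naive[:, None] - b_hat[None, :]
s2_hat = resid.var(axis=0, ddof=1)
w = 1.0 / s2_hat
corrected = (R_centered * w).sum(axis=1) / w.sum()

mse_naive = np.mean((naive - q) ** 2)
mse_corr = np.mean((corrected - q) ** 2)
print(f"MSE of naive average:        {mse_naive:.3f}")
print(f"MSE of corrected estimate:   {mse_corr:.3f}")
```

With heterogeneous reviewer accuracies, down-weighting noisy reviewers typically reduces the mean squared error relative to the plain average, which is the qualitative effect the abstract's simulations document; the paper's actual estimator should be consulted for the formal unbiasedness and consistency results.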
