Abstract
Probability forecasters who are rewarded via a proper scoring rule may care not only about the score, but also about their performance relative to other forecasters. We model this type of preference and show that a competitive forecaster who wants to do better than another forecaster typically should report more extreme probabilities, exaggerating toward zero or one. We consider a competitive forecaster's best response to truthful reporting and also investigate equilibrium reporting functions in the case where another forecaster also cares about relative performance. We show how a decision maker can revise probabilities of an event after receiving reported probabilities from competitive forecasters and note that the strategy of exaggerating probabilities can make well-calibrated forecasters (and a decision maker who takes their reported probabilities at face value) appear to be overconfident. However, a decision maker who adjusts appropriately for the misrepresentation of probabilities by one or more forecasters can still be well calibrated. Finally, to try to overcome the forecasters' competitive instincts and induce cooperative behavior, we develop the notion of joint scoring rules based on business sharing and show that these scoring rules are strictly proper.
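The direction of the exaggeration described above can be seen in a minimal numerical sketch (my own illustration, not the paper's model): under the quadratic (Brier) score, a forecaster whose only goal is to outscore a truthful rival maximizes the probability of winning by shading the report past the rival's, toward the nearer extreme.

```python
import numpy as np

def win_prob(r, p):
    """Probability that a report r strictly outscores a truthful report p
    under the Brier score, when the outcome is Bernoulli(p).
    Score for report q on outcome x in {0, 1} is -(q - x)**2 (higher is better)."""
    # Outcome 1: r wins iff (r - 1)**2 < (p - 1)**2, i.e. r > p.
    # Outcome 0: r wins iff r**2 < p**2, i.e. r < p.
    return p * (r > p) + (1 - p) * (r < p)

p = 0.7  # the competitive forecaster's true belief (hypothetical value)
grid = np.linspace(0.0, 1.0, 1001)
best = grid[np.argmax([win_prob(r, p) for r in grid])]
# Any r > 0.7 wins with probability 0.7, while any r < 0.7 wins only with
# probability 0.3 -- so the best response exaggerates toward 1.
```

In this stripped-down win/lose objective any report past the truthful one is equally good; the paper's richer preferences (caring about both the score and relative standing) pin down how far toward 0 or 1 the best response moves.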