Abstract

In tests of the relative performance evaluation (RPE) hypothesis, empiricists rarely aggregate peer performance in the same way as a firm’s board of directors. Framed as a standard errors-in-variables problem, a commonly held view is that such aggregation errors attenuate the regression coefficient on systematic firm performance towards zero, which creates a bias in favor of the strong-form RPE hypothesis. In contrast, we analytically demonstrate that aggregation differences generate more complicated summarization errors, which create a bias against finding support for strong-form RPE (potentially inducing a Type-II error). Using simulation methods, we demonstrate the sensitivity of empirical inferences to the bias by showing how an empiricist can conclude erroneously that boards, on average, do not apply RPE, simply by selecting more, fewer, or different peers than the board does. We also show that when the board does not apply RPE, empiricists will not find support for RPE (that is, precluding a Type-I error).
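The "commonly held view" the abstract pushes against rests on the classical errors-in-variables result: noise in a regressor attenuates its OLS coefficient toward zero. A minimal Monte Carlo sketch of that baseline mechanism (an illustration of textbook attenuation bias only, not the paper's simulation design; all variable names and parameter values are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta = 1.0  # true coefficient on systematic (peer) performance

# True systematic performance x and firm outcome y.
x = rng.normal(0.0, 1.0, n)
y = beta * x + rng.normal(0.0, 1.0, n)

# The empiricist observes peer performance with classical noise u,
# e.g. from aggregating a different peer set than the board uses.
u = rng.normal(0.0, 1.0, n)  # var(u) = 1 -> attenuation factor 1/2
x_obs = x + u

def ols_slope(regressor, outcome):
    """Simple bivariate OLS slope: cov(x, y) / var(x)."""
    return np.cov(regressor, outcome)[0, 1] / np.var(regressor)

print(ols_slope(x, y))      # ~ 1.0: unbiased with the true regressor
print(ols_slope(x_obs, y))  # ~ 0.5: beta * var(x) / (var(x) + var(u))
```

Under classical measurement error the slope shrinks by the factor var(x)/(var(x)+var(u)); the paper's point is that board-versus-empiricist aggregation differences need not behave like this classical noise, so the resulting bias can run in the opposite direction.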
