Abstract

A decision maker receives signals imperfectly correlated with an unobservable state variable and must take actions whose payoffs depend on that state. The state changes randomly over time. In this environment, we examine the performance of simple linear updating rules relative to Bayesian learning. We show that there is a range of parameters for which linear learning produces exactly the same decisions as Bayesian learning, although not the same beliefs. Outside this parameter range, we use simulations to show that the consumption level attainable under the optimal linear rule is virtually indistinguishable from that attainable under Bayes’ rule, although the respective decisions will not always coincide. These results suggest that simple rules of thumb can have an advantage over Bayesian updating when more complex calculations are costlier to perform than less complex ones. We demonstrate the implications of such an advantage in an evolutionary model in which agents “learn to learn.”
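The comparison described above can be illustrated with a minimal sketch. This is not the paper's model; it is an assumed two-state hidden Markov environment in which the state flips with probability q each period and the signal matches the state with probability p. A Bayesian updater applies the transition and Bayes' rule, while a linear updater simply moves its belief a fixed fraction lam toward the latest signal; both act on whether their belief exceeds 0.5. All parameter names and values here are illustrative assumptions.

```python
import random

def bayes_update(b, signal, p, q):
    # Propagate the prior through the state transition: state flips with prob q.
    b = b * (1 - q) + (1 - b) * q
    # Condition on the signal, which matches the true state with prob p.
    num = b * (p if signal == 1 else 1 - p)
    den = num + (1 - b) * ((1 - p) if signal == 1 else p)
    return num / den

def linear_update(b, signal, lam):
    # Linear rule of thumb: move belief a fixed fraction lam toward the signal.
    return (1 - lam) * b + lam * signal

# Simulate both learners on the same signal stream and count how often
# their decisions (act iff belief > 0.5) agree.  Parameters are illustrative.
random.seed(0)
p, q, lam = 0.8, 0.1, 0.3
state, b_bayes, b_lin = 1, 0.5, 0.5
T, agree = 10000, 0
for _ in range(T):
    if random.random() < q:
        state = 1 - state                      # occasional state switch
    signal = state if random.random() < p else 1 - state
    b_bayes = bayes_update(b_bayes, signal, p, q)
    b_lin = linear_update(b_lin, signal, lam)
    agree += (b_bayes > 0.5) == (b_lin > 0.5)
print(f"decision agreement: {agree / T:.3f}")
```

In runs like this the two learners' beliefs differ period by period, yet their decisions agree in the large majority of periods, consistent with the abstract's claim that the rules can coincide in decisions without coinciding in beliefs.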

