Abstract
The monetary policy rules that are widely discussed--notably the Taylor rule--are remarkable for their simplicity. One reason for the apparent preference for simple ad hoc rules over optimal rules might be the assumption of full information maintained in the computation of an optimal rule. Arguably, this makes optimal control rules less robust to model specification errors. In this paper, we drop the full-information assumption and investigate the choice of policy rules when agents must learn the rule that is in use. To do this, we conduct stochastic simulations on a small, estimated forward-looking model, with agents following a strategy of least-squares learning or discounted least-squares learning. We find that the costs of learning a new rule can, under some circumstances, be substantial. These circumstances vary with the preferences of the monetary authority and with the rule initially in place. Policymakers with strong preferences for inflation control must incur substantial costs when they change the rule, but they are nearly always willing to bear those costs. Policymakers with weak preferences for inflation control, on the other hand, may actually benefit from agents' prior belief that a strong rule is in place.
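To illustrate the learning strategy mentioned above, the sketch below implements constant-gain (discounted) recursive least squares, where agents repeatedly re-estimate the coefficients of the policy rule from observed data; setting the gain to 1/t would recover ordinary least-squares learning. The Taylor-type rule, the coefficient values, and the variable names are illustrative assumptions, not the paper's estimated model.

```python
import numpy as np

def discounted_rls(X, y, gain=0.05):
    """Constant-gain (discounted) recursive least-squares learning.

    X    : (T, k) array of regressors observed each period
    y    : (T,) array of observed policy rates
    gain : constant gain; a decreasing gain of 1/t gives ordinary
           least-squares learning instead of discounted learning
    Returns the (T, k) path of agents' estimated rule coefficients.
    """
    T, k = X.shape
    b = np.zeros(k)        # agents' initial belief about the rule coefficients
    R = np.eye(k)          # estimate of the regressor moment matrix
    path = np.zeros((T, k))
    for t in range(T):
        x = X[t]
        R = R + gain * (np.outer(x, x) - R)                  # update moment matrix
        b = b + gain * np.linalg.solve(R, x) * (y[t] - x @ b)  # update beliefs
        path[t] = b
    return path

# Hypothetical example: agents learning a Taylor-type rule
#   i_t = 1.5 * inflation_t + 0.5 * gap_t + shock_t
rng = np.random.default_rng(0)
T = 500
inflation = rng.normal(2.0, 1.0, T)
gap = rng.normal(0.0, 1.0, T)
rate = 1.5 * inflation + 0.5 * gap + rng.normal(0.0, 0.25, T)

X = np.column_stack([inflation, gap])
beliefs = discounted_rls(X, rate, gain=0.05)
print("final perceived rule coefficients:", beliefs[-1].round(2))
```

With a constant gain, beliefs keep discounting old observations, so agents adapt faster when the central bank switches rules but never fully settle on the true coefficients, which is one channel through which the transition costs discussed in the abstract can arise.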