Abstract
One of the problems facing policymakers is that recent releases of data are liable to subsequent revision. This paper discusses how to deal with this problem and is in two parts. In the normative part of the paper, we study the design of monetary policy rules in a model in which data uncertainty varies with the vintage. We show how coefficients on lagged variables in optimised simple rules for monetary policy increase as the relative measurement error in early vintages of data increases. We also explore scenarios in which policymakers are uncertain by how much measurement error in new data exceeds that in old data. In such cases it can be optimal to assume that the ratio of measurement error in new data to that in old data is larger rather than smaller. In the positive part of the paper, we show that the response of monetary policy to vintage-varying data uncertainty may generate evidence of apparent interest rate smoothing in interest rate reaction functions, but we suggest that it may not generate enough smoothing to account for what has been observed in the data.
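The mechanism the abstract summarises can be illustrated with a minimal simulation. The sketch below is not the paper's model; it assumes a hypothetical partial-adjustment rule i_t = rho*i_{t-1} + (1-rho)*phi*x_t, where x_t is a noisy new-vintage measurement of a true output gap y_t, and it shows that the loss-minimising smoothing coefficient rho rises as the measurement error in new data grows. All parameter values (phi, the gap's persistence, the noise levels) are illustrative assumptions.

```python
import numpy as np

# Hypothetical illustration (not the paper's model): a policymaker responds to a
# noisily measured output gap with the partial-adjustment rule
#   i_t = rho * i_{t-1} + (1 - rho) * phi * x_t,
# where x_t = y_t + e_t is the latest (noisy) vintage of the true gap y_t.
# We choose rho to minimise the mean squared gap between i_t and the
# full-information benchmark i*_t = phi * y_t, and report how the optimal rho
# changes as measurement noise in new data increases.

rng = np.random.default_rng(0)
T, phi, ar = 50_000, 1.5, 0.8            # sample length, response coefficient, gap persistence

y = np.zeros(T)                           # "true" output gap, AR(1)
shocks = rng.normal(0.0, 1.0, T)
for t in range(1, T):
    y[t] = ar * y[t - 1] + shocks[t]

def loss(rho, noise_sd):
    """Mean squared deviation of the rule from the full-information rate."""
    x = y + rng.normal(0.0, noise_sd, T)  # noisy new-vintage measurement
    i = np.zeros(T)
    for t in range(1, T):
        i[t] = rho * i[t - 1] + (1.0 - rho) * phi * x[t]
    return np.mean((i - phi * y) ** 2)

rhos = np.linspace(0.0, 0.95, 20)
for noise_sd in (0.5, 1.0, 2.0):          # increasing measurement error in new data
    best = min(rhos, key=lambda r: loss(r, noise_sd))
    print(f"noise sd {noise_sd}: optimal smoothing rho ~ {best:.2f}")
```

Under these assumptions, noisier new vintages push the optimal weight on the lagged interest rate upward, which is the sense in which vintage-varying data uncertainty can generate apparent interest rate smoothing.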