Abstract
We consider a prototypical representative-agent forward-looking model, and study the low frequency variability of the data when the agent's beliefs about the model are updated through linear learning algorithms. We find that learning in this context can generate strong persistence. The degree of persistence depends on the weights agents place on past observations when they update their beliefs, and on the magnitude of the feedback from expectations to the endogenous variable. When the learning algorithm is recursive least squares, long memory arises when the coefficient on expectations is sufficiently large. In algorithms with discounting, long memory provides a very good approximation to the low-frequency variability of the data. Hence long memory arises endogenously, due to the self-referential nature of the model, without any persistence in the exogenous shocks. This is distinctly different from the case of rational expectations, where the memory of the endogenous variable is determined exogenously. Finally, this property of learning is used to shed light on some well-known empirical puzzles.
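The mechanism described above can be illustrated with a minimal sketch. The code below is not the paper's model; it assumes a stylized self-referential setup of the kind the abstract describes, with a hypothetical actual law of motion y_t = alpha * a_t + eps_t, where a_t is the agent's belief, eps_t is an i.i.d. shock, and beliefs are updated by a linear (constant-gain, i.e. discounted) learning rule. With a large feedback coefficient alpha and a small gain, the simulated series is strongly persistent even though the shocks have no persistence at all.

```python
import random

def simulate_learning(alpha, gain, T, seed=0):
    """Simulate a stylized self-referential model under linear learning.

    Actual law of motion (illustrative, not the paper's model):
        y_t = alpha * a_t + eps_t,   eps_t ~ i.i.d. N(0, 1)
    Belief update (constant gain; recursive least squares would
    instead use a decreasing gain 1/t):
        a_{t+1} = a_t + gain * (y_t - a_t)
    """
    rng = random.Random(seed)
    a = 0.0          # agent's belief (forecast of the endogenous variable)
    ys = []
    for _ in range(T):
        eps = rng.gauss(0.0, 1.0)   # i.i.d. exogenous shock, no persistence
        y = alpha * a + eps         # expectations feed back into the outcome
        a += gain * (y - a)         # linear belief update
        ys.append(y)
    return ys

def autocorr(xs, lag):
    """Sample autocorrelation at a given lag."""
    n = len(xs)
    m = sum(xs) / n
    num = sum((xs[t] - m) * (xs[t + lag] - m) for t in range(n - lag))
    den = sum((x - m) ** 2 for x in xs)
    return num / den
```

For example, `autocorr(simulate_learning(0.95, 0.05, 20000), 1)` is substantially positive, while with `alpha = 0` (no feedback from expectations) the series is white noise; the persistence is generated entirely by the learning dynamics.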