Abstract
I study a class of continuous-time games of learning and imperfect monitoring. A long-run player and a market share a common prior about the initial value of a Gaussian hidden state, and learn about its subsequent values by observing a noisy public signal. The long-run player can nevertheless control the evolution of this signal, and thus affect the market’s belief. The public signal has an additive structure, and noise is Brownian. I derive conditions for an ordinary differential equation to characterize equilibrium behavior in which the long-run player’s actions depend on the history of the game only through the market’s correct belief. Using these conditions, I demonstrate the existence of pure-strategy equilibria in Markov strategies for settings in which the long-run player’s flow utility is nonlinear. The central finding is a learning-driven ratchet principle affecting incentives. I illustrate the economic implications of this principle in applications to monetary policy, earnings management, and career concerns.
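To fix ideas, the display below sketches the kind of belief dynamics the abstract refers to. The primitives are assumptions chosen for illustration (an Ornstein–Uhlenbeck hidden state $\theta_t$ and an additive public signal $\xi_t$ shifted by the long-run player's action $a_t$); they are not taken from the paper itself. Under the market's conjectured strategy $a^*_t$, standard Kalman–Bucy filtering yields Gaussian posterior beliefs with mean $p_t$ and variance $\gamma_t$:

```latex
% Illustrative primitives (assumed, not necessarily the paper's exact model):
%   hidden state:  d\theta_t = -\kappa(\theta_t - \eta)\,dt + \sigma_\theta\, dZ^\theta_t
%   public signal: d\xi_t    = (a_t + \theta_t)\,dt + \sigma_\xi\, dZ^\xi_t
% Posterior mean p_t and variance \gamma_t under the conjectured strategy a^*_t:
\begin{align}
  dp_t &= -\kappa\,(p_t - \eta)\,dt
          + \frac{\gamma_t}{\sigma_\xi^{2}}\bigl(d\xi_t - (a^*_t + p_t)\,dt\bigr), \\
  \dot{\gamma}_t &= \sigma_\theta^{2} - 2\kappa\,\gamma_t - \frac{\gamma_t^{2}}{\sigma_\xi^{2}}.
\end{align}
```

In a sketch of this kind, the dependence of the belief update on the conjectured action $a^*_t$ is what creates scope for the long-run player to influence $p_t$, and hence for the ratchet-type incentives the abstract describes.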