Crises in Economic Thought, Secular Stagnation, and Future Economic Research


Similar Papers
  • Research Article
  • Cited by 8
  • 10.1086/596001
Reflections on Monetary Policy in the Open Economy
  • May 1, 2009
  • NBER International Seminar on Macroeconomics
  • Richard H. Clarida

A perennial topic of discussion among scholars and policymakers is how best to think about a benchmark for macroeconomics as it applies to monetary policy. Should the benchmark for policy analysis be the open economy with international interest rate linkages and flexible exchange rates (after all, major economies are in fact open with flexible exchange rates), or should it be the closed economy in which such linkages and exchange rate adjustments are assumed away? Of course, few if any policymakers would seek to guide policy by ignoring capital flows and exchange rates, but in many cases it appears as though the starting point for analysis is the closed-economy macro model, these days a variant of the dynamic new Keynesian model. Those who start from a closed-economy framework often have questions about how "openness" influences the analysis. How does the neutral real interest rate depend on "global" developments? Is the Phillips curve trade-off between inflation and domestic output better or worse in the open versus the closed economy? Is "potential GDP" a function of global developments, or only of domestic resources available and domestic productivity? Perhaps most important, how, if at all, does openness influence the optimal monetary policy rule? Is a Taylor rule the right monetary policy for an open economy? In 2002 Jordi Gali, Mark Gertler, and I published a paper in the Journal of Monetary Economics that developed a benchmark (at least in our way of thinking) dynamic two-country optimizing macro model of optimal monetary policies in the open economy. Our focus in that paper was deriving optimal policy rules in the two-country model and assessing the gains from international monetary policy cooperation. In that paper, we emphasized the following implications of the model:
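The closed-economy Taylor rule the abstract questions can be sketched in a few lines. The coefficients below are the classic Taylor (1993) values, and the sample inputs are illustrative assumptions, not values from the Clarida-Gali-Gertler two-country model:

```python
# A minimal sketch of the standard (closed-economy) Taylor rule, using the
# original Taylor (1993) coefficients of 0.5 on both gaps. All parameter
# values here are illustrative, not estimates from any particular model.

def taylor_rate(inflation, output_gap, r_star=2.0, pi_target=2.0):
    """Nominal policy rate (percent) prescribed by the Taylor rule."""
    return r_star + inflation + 0.5 * (inflation - pi_target) + 0.5 * output_gap

# Example: inflation at 3%, output 1% above potential.
rate = taylor_rate(inflation=3.0, output_gap=1.0)
print(round(rate, 2))  # 6.0
```

The open-economy question in the abstract is whether terms like the exchange rate or foreign interest rates belong in such a rule at all, or whether this closed-economy form remains (approximately) optimal.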

  • Research Article
  • 10.1086/690248
Comment
  • Jan 1, 2017
  • NBER Macroeconomics Annual
  • Harald Uhlig

Comment

  • Research Article
  • Cited by 3
  • 10.1086/696069
Distortions in Macroeconomics
  • Apr 1, 2018
  • NBER Macroeconomics Annual
  • Olivier Blanchard

After-dinner talks are the right places to test tentative ideas, hoping for the indulgence of the audience. Mine will be in that spirit, and reflect my thoughts on what I see as a central macroeconomic question: What are the distortions that are central to understanding short-run macroeconomic evolutions?

I shall argue that, over the past 30 years, macroeconomics had, to an unhealthy extent, focused on a one-distortion (nominal rigidities), one-instrument (policy rate) view of the macro economy. As useful as the body of research that came out of this approach was, it was too reductive, and proved inadequate when the Great Financial Crisis came. We need, even in our simplest models, to take into account more distortions. Having stated the general argument, I shall turn to a specific example and show how this richer approach modifies the way we should think about policy responses to the low neutral interest rates we observe in advanced economies today.

Let me develop this theme in more detail. Back in my student days, that is, the mid-1970s, much of macroeconomic research was focused on building larger and larger macroeconometric models based on the integration of many partial equilibrium parts. Some researchers worked on explaining consumption, others on explaining investment, or asset demands, or price and wage setting. The empirical work was motivated by theoretical models, but these models were taken as guides rather than as tight constraints on the data. The estimated pieces were then put together in larger models.
The behavior captured in the estimated equations reflected in some ways both optimization and distortions, but the mapping was left, it was felt by necessity, implicit and somewhat vague. (I do not remember hearing the word "distortions" used in macro until the 1980s.)

These large models were major achievements. But, for various reasons, researchers became disenchanted with them. Part of it was obscurity: the parts were reasonably clear, but the sum of the parts often had strange properties. Part of it was methodology: identification of many equations was doubtful. Part of it was poor performance: the models did not do well during the oil crises of the 1970s. The result of disappointment was a desire to go back to basics.

For my generation of students, three papers played a central role. One was the paper by Robert Lucas (1973) on imperfect information. The other two were the papers by Stanley Fischer (1977) and by John Taylor (1980) on nominal rigidities. While the approaches were different, the methodology was similar: the focus was on the effects of one distortion—imperfect information leading to incomplete nominal adjustment in the case of Lucas, and explicit nominal rigidities, without staggering of decisions in Fischer, with staggering of decisions in Taylor. All other complications were cast aside to focus on the issue at hand, the role of nominal rigidities and the implied nonneutrality of money.

Inspired by these models, further work then clarified the role of monopolistic competition, the role of menu costs, and the role of different staggering structures, showing how each of them shaped the dynamic effects of nominal shocks. The natural next step was the re-integration of these nominal rigidities in a richer, microfounded, general equilibrium model. The real business cycle model, developed by Kydland and Prescott (1982), provided the simplest and most convenient environment.
Thus was born the New Keynesian (NK) model, a slightly odd marriage of the most neoclassical model and an ad hoc distortion. But it was a marriage that has held together to this day. In the hands of researchers like Woodford (2003) or Clarida, Gali, and Gertler (1999), the model provided the basis, or at least the intellectual support, for the development of a new approach to monetary policy, that is, inflation targeting, an approach adopted by most central banks around the world. It had a rich set of implications, with their origin deriving from the basic conceptual structure: one distortion, that is, nominal rigidities in some form (often the convenient Poisson form derived by Calvo), and one instrument, the nominal policy rate. The right use of the instrument could largely undo the distortion. Maintaining constant and low inflation would both minimize distortions and lead to the right level of output, a proposition Jordi Gali and I baptized, tongue in cheek, the Divine Coincidence (Blanchard and Gali 2007).

What I have described is obviously a caricature. First, but this is minor, there had to be at least another distortion: to talk about price setting, firms had to have some pricing power, and this led to a monopoly markup. Under Dixit-Stiglitz constant elasticity assumptions, the markup, however, was constant, and the effects of the distortion were largely irrelevant with respect to the effects of monetary policy. Second, some models had more than one nominal rigidity, for example, rigidities in both wage and price setting as in Erceg, Henderson, and Levin (2000); some models combined real and nominal rigidities, for example, in my work with Gali (Blanchard and Gali 2007). Third, there was important work on credit (e.g., Bernanke and Gertler 1989) and on liquidity (e.g., Diamond and Dybvig 1983; Holmstrom and Tirole 1998).
But, while these papers were well known and some of these mechanisms were integrated in DSGE models, they did not become part of the basic model. (I remember telling Bengt that, while I admired his work on liquidity with Jean, I was not sure how central it was to macro.) Stable and low inflation as the target, and the use of the policy rate as the instrument, remained the basic approach to policy.

Even before the Great Financial Crisis, I felt some unease with two characteristics of the basic model and its larger DSGE cousins (Blanchard 2009). The first was that the deep reasons behind nominal rigidities, such as the costs of collecting information or of taking decisions, were probably relevant beyond price or wage setting, and thus were relevant for consumption, investment, and portfolio choices, with important but neglected implications for macroeconomic dynamics. The second was that the models assumed much too much forward-lookingness on the part of agents. When combined with rational expectations, the implications of the Euler equation for consumption, or the interest parity condition for exchange rates, were simply counterfactual.

The financial crisis then made it clear that the basic model, and even its DSGE cousins, had other serious problems: the financial sector was much more central to macroeconomics than had been assumed. Financial markets were incomplete, raising issues of solvency and liquidity. The role and the importance of debt were central to understanding credit booms and busts. Bank runs were not just a historical footnote, but an essential aspect of maturity transformation. These distortions were at the core of the crisis; nominal rigidities may have made it worse, but even absent nominal rigidities, the financial crisis would likely have led to a large decrease in output.

Since the start of the crisis, DSGE models have been extended to allow for a richer financial sector and integrate some of these distortions (e.g., Gertler and Kiyotaki 2013).
But I feel we still do not have the right core model. Put another way, suppose that we were building a small macroeconomic model from scratch. What are, say, the three distortions we would deem essential to have in such a model, and, by implication, to have as the core of any DSGE model? What model should we teach at the start of the first-year graduate course?[1]

I do not have the answer, but I have a few ideas. This is where my talk becomes even more tentative.

My first distortion would remain nominal rigidities. As much as I try, I just cannot interpret macroeconomic evolutions without relying on nominal rigidities. Proof of their relevance is in the ability of central banks to maintain their desired nominal and real interest rates over long periods of time, or in the dramatically different behavior of real exchange rates under fixed and flexible exchange rate systems (Mussa 1986).

My second distortion would be finite horizons. Not so much the finiteness that comes from death and the absence of operative bequest motives, but the finite horizon that comes from bounded rationality, from myopia, from the inability to think too far into the future.

My third distortion would be in the role of own funds in spending decisions, whether it is capital for banks, or capital or collateral for firms or people. While it was only one of many distortions at play in the financial crisis, it can explain much of what happened, and how shocks affect financial intermediation.

How I would actually put them together in a basic model is a much harder question, the difference between a dinner talk and a serious paper. We have off-the-shelf formalizations for nominal rigidities, for myopia, for capital constraints; for example, Calvo for the first, Gabaix (2016) for the second, and Holmstrom and Tirole (1997) for the third.[2] Each of them has its strengths and weaknesses, and whether they fit together conceptually is not obvious.
(On this, I like the remarks by Cochrane [2016] on the potential misuse of the Gabaix formalization of myopia.) In thinking about how to combine these or other formalizations, I still struggle between keeping strictly to microfoundations or writing plausible characterizations more faithful to the empirical evidence, but more loosely connected to theory (this is the old discussion between the pros and cons of the IS-LM versus the NK model, and whether there is a middle way). But this is a separate set of methodological issues, which I shall leave aside here.

The Low Real Safe Rate and Macroeconomic Policy

For better or for worse, simple conceptual frames such as the NK model strongly shape and limit our thinking. With the above discussion in mind, let me take an example, namely the potential policy implications of the very low level of the policy rate needed to maintain output at potential, the so-called neutral rate.[3]

Nearly all the discussion about policy implications has focused on monetary policy. In the one-distortion, one-instrument view of the economy, so long as the policy rate does not hit the zero lower bound, the low neutral rate does not pose a particular problem: the central bank should simply choose a policy rate consistent with this low neutral rate. At the zero lower bound (or, to the extent that we now know that policy rates can be at least slightly negative, the "effective lower bound"), the issue becomes the degree to which financial assets are imperfect substitutes, and how the policy tool kit must be extended to allow for purchases of specific assets. This is indeed how, for the most part, both the policy discussion and policy actions have unfolded.

Figure 1 suggests, however, that the discussion should be more ambitious. It shows the evolution of the one-year real rate (constructed as the difference between the one-year Treasury rate and the corresponding CBO forecast of inflation) and the real growth rate in the United States since 1980.
The one-year real rate has indeed come down since the early 1980s. And, interestingly, it is now substantially below the growth rate, and expected to be below it for the foreseeable future. This raises two interesting possibilities.

The first is that the low policy rate reflects a low marginal product of capital, and that the US economy has become dynamically inefficient. This could be the case if, for example, consumers had finite horizons, either for physical reasons as in the overlapping generations model (should we call death a distortion?) or because of bounded rationality. If this were the case, the right policy tool would not be monetary policy, but rather policies aimed at decreasing saving. The right focus should be on fiscal policy. The right policy would be to increase public debt, and such a policy could be Pareto improving.

As exciting as this possibility would be, it does not appear, however, that this is the right explanation for the low safe rate. What matters for dynamic inefficiency is not the relation between the safe rate and the growth rate, but between the marginal product of capital and the growth rate. And the empirical evidence on the marginal product is that it has remained much higher than the growth rate.

This leads to the second hypothesis: that the difference between the marginal product and the safe rate has increased, leading to a low safe rate for a given marginal product. Put another way, it points to a large liquidity or risk premium. This in turn leads to a focus on the factors behind the premium, and the role of distortions in financial markets. Thinking of the premium as a risk premium takes us back to the equity premium puzzle identified by Mehra and Prescott (1985), and the various tentative resolutions to the puzzle. Thinking of the premium as a liquidity premium takes us to what is behind the demand for safe assets, along the lines of Caballero and Farhi (2014).
It leads us to think about the role of financial regulations, and thus the role of regulatory policy. And if the high premium reflects, at least in part, distortions, the focus should then be on both fiscal and financial policies. If, for example, the safe rate is going to remain below the marginal product of capital, this implies that the government can borrow, never repay the debt, and still maintain a stable debt-to-GDP ratio. Should it do it? The fact that it can does not mean that it should. Or, to the extent that various distortions are behind the premium, should it instead remove them, even if this means a higher safe rate, and thus a higher cost of public borrowing?

My intention here was not to give the answers, but to show how much a richer view of the relevant distortions leads to a richer discussion of policy. To repeat and conclude: we must move from a dominant "one distortion/one instrument" to a "many distortions/many instruments" view of the economy. In doing so, the way we think about the economy, and about the appropriate policies, will be much more fertile.

Endnotes

Talk, NBER Macroeconomics Annual Conference, April 2017. I thank Marty Eichenbaum, Jonathan Parker, and Adam Posen for comments. For acknowledgments, sources of research support, and disclosure of the author's material financial relationships, if any, please see http://www.nber.org/chapters/c13955.ack.

1. This may be a hopeless and misguided search. Maybe even the simplest characterization of fluctuations requires many more distortions. Maybe different distortions are important at different times. Maybe there is no simple model … I keep faith that there is.

2. A fascinating question is why the Euler equation fails. One hypothesis is bounded rationality, for example, à la Gabaix. Another is borrowing constraints, for example, à la McKay, Nakamura, and Steinsson (2016). The answer is probably both.
Interestingly, both lead, at least to a close approximation, to a similar modified Euler equation.

3. After giving the talk, I was made aware of an article by Davig and Gurkaynak (2015) that has a closely related theme.

References

Bernanke, Ben, and Mark Gertler. 1989. "Agency Costs, Net Worth, and Business Fluctuations." American Economic Review 79 (1): 14–31.
Blanchard, Olivier. 2009. "The State of Macro." Annual Review of Economics 1:209–28.
Blanchard, Olivier, and Jordi Gali. 2007. "Real Rigidities and the New Keynesian Model." Journal of Money, Credit, and Banking 39 (1): 35–66.
Caballero, Ricardo, and Emmanuel Farhi. 2014. "The Safety Trap." NBER Working Paper no. 19927, Cambridge, MA.
Clarida, Richard, Jordi Gali, and Mark Gertler. 1999. "The Science of Monetary Policy: A New Keynesian Perspective." Journal of Economic Literature 37:1661–707.
Cochrane, John. 2016. "Comments on a Behavioral New-Keynesian Model by Xavier Gabaix." Working Paper, University of Chicago.
Davig, Troy, and Refet Gurkaynak. 2015. "Is Optimal Monetary Policy Always Optimal?" International Journal of Central Banking 11 (S1): 353–82.
Diamond, Douglas, and Philip Dybvig. 1983. "Bank Runs, Deposit Insurance, and Liquidity." Journal of Political Economy 91 (3): 401–19.
Erceg, Christopher, Dale Henderson, and Andrew Levin. 2000. "Optimal Monetary Policy with Staggered Wage and Price Contracts." Journal of Monetary Economics 46 (2): 281–313.
Fischer, Stanley. 1977. "Long-Term Contracts, Rational Expectations and the Optimal Money Supply Rule." Journal of Political Economy 85:191–205.
Gabaix, Xavier. 2016. "A Behavioral New Keynesian Model." NBER Working Paper no. 22954, Cambridge, MA.
Gertler, Mark, and Nobuhiro Kiyotaki. 2013. "Banking, Liquidity, and Bank Runs in an Infinite-Horizon Economy." NBER Working Paper no. 19129, Cambridge, MA.
Holmström, Bengt, and Jean Tirole. 1997. "Financial Intermediation, Loanable Funds, and the Real Sector." Quarterly Journal of Economics 112 (3): 663–91.
———. 1998. "Private and Public Supply of Liquidity." Journal of Political Economy 106 (1): 1–40.
Kydland, Finn, and Edward Prescott. 1982. "Time to Build and Aggregate Fluctuations." Econometrica 50:1345–70.
Lucas, Robert. 1973. "Some International Evidence on the Output-Inflation Trade-Off." American Economic Review 63 (3): 326–34.
McKay, Alisdair, Emi Nakamura, and Jon Steinsson. 2016. "The Power of Forward Guidance Revisited." American Economic Review 106 (10): 3133–58.
Mehra, Rajnish, and Edward Prescott. 1985. "The Equity Premium." Journal of Monetary Economics 15:145–61.
Mussa, Michael. 1986. "Nominal Exchange Rate Regimes and the Behavior of Real Exchange Rates: Evidence and Implications." Carnegie-Rochester Conference Series on Public Policy 25:117–214.
Taylor, John. 1980. "Aggregate Dynamics and Staggered Contracts." Journal of Political Economy 88 (1): 1–24.
Woodford, Michael. 2003. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ: Princeton University Press.

NBER Macroeconomics Annual, Volume 32 (2017). Sponsored by the National Bureau of Economic Research (NBER). DOI: https://doi.org/10.1086/696069. © 2018 by the National Bureau of Economic Research. All rights reserved.
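The debt-rollover observation in the talk above (if the safe rate stays below the growth rate, the government can borrow, never repay, and still keep the debt-to-GDP ratio stable) reduces to simple arithmetic. A minimal sketch with illustrative rates, not estimates:

```python
# Debt-to-GDP dynamics with a zero primary balance: b' = b * (1+r)/(1+g).
# If r < g, the ratio shrinks each period even though nothing is repaid.
# The values of r and g below are illustrative assumptions.

def debt_ratio_path(b0, r, g, periods):
    """Path of the debt-to-GDP ratio when debt is rolled over, never repaid."""
    path = [b0]
    for _ in range(periods):
        path.append(path[-1] * (1 + r) / (1 + g))
    return path

path = debt_ratio_path(b0=1.0, r=0.01, g=0.03, periods=50)
print(round(path[-1], 3))  # the ratio falls steadily with no repayment
```

Whether the government should exploit this, as the talk stresses, is a separate question from whether it can.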

  • Research Article
  • 10.22055/jqe.2019.28357.2025
Examining the Inclusion of Financial Stability in Monetary Policymaking in the Iranian Economy Using a DSGE Model
  • Nov 9, 2019
  • Pedram Davoudi + 1 more

Price stability and sustainable economic growth are conventionally considered the key goals of monetary policy. Since the financial crisis of 2007, financial stability has been recognized as a third pillar of the monetary policy objective function. Although financial stability as a third target can conflict with the two conventional monetary policy goals, it mitigates the medium-term impact of financial turmoil on price and growth instability in the macroeconomic environment. Financial crises, which have historically created large deviations from monetary policy goals, make it necessary to supplement the conventional policy instruments (the policy interest rate, monetary aggregates, and the reserve requirement ratio) with unconventional ones. In this context, unconventional supplementary instruments streamline the monetary transmission mechanism toward the three policy goals by extending open market operations to non-governmental bonds, facilitating banks' overnight financing in the payment system, and adopting a zero lower bound interest rate policy. In this research, a Dynamic Stochastic General Equilibrium model (following Gertler and Karadi, 2011) is used to estimate the impact of conventional (interest rate) and unconventional (credit line) monetary policy instruments on macroeconomic variables (inflation, output growth, the exchange rate, and the stock market price index), while simulating the response of these variables to financial instability. The simulation evaluates monetary policy impulse response functions under an optimization approach in a crisis scenario.
The monetary policy rules assessed in this paper are introduced in both optimizing and non-optimizing form: a Taylor interest rate rule without financial stability, a simple optimized interest rate rule with financial stability, and an unconventional monetary policy rule. In this framework, central bank lines of credit, the unconventional tool controlled by the policymaker, are injected directly into the banking network's flow of funds. The central bank, having first sold public bonds to households as a risk-free investment, accumulates financial resources on its balance sheet; in a second step these resources are lent to firms as an unconventional expansionary policy, raising the banking network's leverage ratio, streamlining credit operations, and encouraging private investment. Central bank intermediation is assumed to be less efficient than private intermediation, reflecting the central bank's higher cost of identifying and allocating funds to key economic sectors. The DSGE parameters are estimated with a Bayesian approach using time series for several macroeconomic variables, including consumption, private investment, inflation, government expenditure, the change in outstanding loans, the commercial bank leverage ratio, and stock market returns. Because Bayesian estimation requires prior distributions for the parameters, the priors are set through numerical analysis as well as from previous research. The log data density of the estimation is about 399, and the robustness of the estimated parameters is verified with the test of Brooks and Gelman (1998). In this study, a rapid reduction in the quality of capital serves as the financial crisis shock, which affects the key macroeconomic variables.
Simulation results indicate that unconventional monetary policy efficiently supports real-sector sustainability while mitigating financial instability (in asset markets) in the macroeconomic environment. Financial stability is accompanied by lower nominal interest rates and inflation, in line with Gertler and Karadi (2011). In other words, although unconventional instruments were used only to a limited extent amid financial turmoil in the Iranian economy, they create sustainable growth in the medium term, with lower interest rates and inflation and higher household welfare. Using unconventional monetary policy instruments diversifies the policy toolkit and reduces the deviation between conventional policy instruments and the target variables (prices, output growth, and financial stability) in the Iranian macroeconomic environment.
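The crisis experiment described above hinges on a persistent shock to capital quality, and shocks in DSGE models of this kind typically follow AR(1) processes. A minimal sketch of the resulting impulse response, with an illustrative persistence parameter rather than the paper's estimate:

```python
# Impulse response of an AR(1) shock process, xi_t = rho * xi_{t-1},
# the standard form for DSGE shocks such as capital quality.
# rho = 0.9 is an illustrative assumption, not an estimated value.

def irf_ar1(rho, horizon, impulse=1.0):
    """Response of the shock at t = 0, 1, ..., horizon-1 to a unit impulse."""
    return [impulse * rho ** t for t in range(horizon)]

irf = irf_ar1(rho=0.9, horizon=20)
# The response decays geometrically; half-life is log(0.5)/log(rho) periods.
```

Model variables (output, inflation, leverage) then inherit their dynamics from this decaying shock through the model's policy functions.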

  • Research Article
  • 10.1086/658315
Comment
  • Mar 1, 2011
  • NBER International Seminar on Macroeconomics
  • Hans-Helmut Kotz

Comment

  • Research Article
  • 10.1086/594136
Comment
  • Jan 1, 2008
  • NBER Macroeconomics Annual
  • Bennett T. McCallum

I. Introduction

This is an interesting and challenging paper, in which Atkeson and Kehoe put forth a very strong critique of current mainstream monetary policy analysis. Monetary economists have, of course, been rather pleased with the development of their subject over the past 10–15 years, current U.S. policy difficulties notwithstanding. Indeed, the tone of a prominent recent expository paper by my colleague, Marvin Goodfriend, is somewhat triumphal in spirit.[1] The spirit of the Atkeson and Kehoe paper, by contrast, is conveyed by a recent publication of theirs, together with coauthor Fernando Alvarez, which bears the title "If Exchange Rates Are Random Walks, Then Almost Everything We Say about Monetary Policy Is Wrong" (Alvarez, Atkeson, and Kehoe 2007). That paper focuses on exchange rate failures, whereas the current one stresses the term structure of interest rates, but the line of argument is basically the same.

The title of the 2007 paper leads me rather naturally to ask myself what it is that I would say in answer to the implied question, "What important things do monetary economists really know—or at least believe—about monetary policy?" My own answer to that question would go along the following lines: (i) We believe that if the monetary authority keeps monetary policy expansionary for a substantial length of time, the main effect will be to generate a higher inflation rate than would have prevailed otherwise, with little or no overall effect on aggregate production and employment. (ii) Nominal interest rates will be higher, also, with real rates being affected very little.
(iii) If, however, the monetary authority changes policy unexpectedly and abruptly in an expansionary direction, there will most likely be an expansion in aggregate output and employment—but it will be only temporary. (iv) If these changes are in the direction of tighter policy, the signs of the above-mentioned effects will be reversed. (v) In particular, the monetary authority has the power to generate a recession, in which output and then the inflation rate will fall. (vi) The precise nature of the mechanism that generates the real effects of monetary policy changes of this type is not very well understood. Then, if my questioner had not wandered away in boredom, I would want to add something like the following: (vii) The foregoing points refer to an expansionary or contractionary monetary policy stance—loose or tight—but how is this measured? Well, a sustained high growth rate of the stock of base money will (under most institutional arrangements) be expansionary, but matters are a little less clear-cut when the central bank actually carries out its policy by manipulating overnight interest rates. Nevertheless, there are ways in which we can characterize tighter versus looser policy in terms of interest rate rules by reference to the implied target inflation rate, the strength of responses to deviations from target, and so forth.

Now, I suspect that Atkeson and Kehoe probably do not disagree with most of these statements as to what monetary economists know (or believe), even on a substantive basis.[2] But their title of the current paper, as distinct from the 2007 item, refers to a need for a new approach to monetary policy analysis. So let us turn to a consideration of what today's mainstream approach is. As it happens there is a short statement of that type, in a paper of mine, that gives the following description.
The approach is one in which "the researcher specifies a quantitative macroeconomic model that is intended to be structural (invariant to policy changes) and consistent with both theory and data. Then, by stochastic simulation or analytical means, he determines how crucial variables (such as inflation and the output gap) behave on average under various alternative policy rules. Usually, rational expectations (RE) is assumed in both stages. Evaluation of the different outcomes can be accomplished by means of an optimal control exercise, or by reference to an explicit loss function, or left to the judgment (i.e., loss function) of the implied policymaker" (McCallum 2001, 258). Here, too, I doubt that Atkeson and Kehoe have any major disagreement with this general approach. What they do disagree with, if I understand at all, is the model that is typically used in recent work and taken to be structural.[3]

In a sense my last statement could be regarded as merely quibbling over their title. But the point seems to be one of some importance: if Atkeson and Kehoe can generate an optimizing model that incorporates reliable, quantitative estimates reflecting time-varying "risk" (i.e., state-dependent variances and covariances) and endogenously explains inflation and output fluctuations, then monetary economists would presumably be happy to incorporate such features in their models—and would not consider this to reflect any basically new approach.
Be that as it may, in what follows I will briefly review their featured empirical regularities, discuss issues concerning their suggested modeling strategy, and provide a brief conclusion.

1. See “How the World Achieved Consensus on Monetary Policy” (Goodfriend 2007).
2. They would probably grumble, justifiably, about the vagueness of point vii.
3. McCallum (2001, 258) goes on to say: “There is also considerable agreement about the general, broad structure of the macroeconomic model to be used.” Atkeson and Kehoe clearly would not share in this agreement.

II. Empirical Regularities

Atkeson and Kehoe begin, in Section I, with “four key regularities regarding the dynamics of interest rates and risk that we use to guide our construction” of a model and its pricing kernel. The first two pertain to a principal components analysis of a collection of interest rates, specifically, a 3‐month T‐bill rate and zero‐coupon yields on U.S. Treasury securities with k‐year maturities for k = 1, 2, …, 13. Time series observations are monthly over 1946.12–2007.12. The first regularity is that “the first principal component accounts for over 90% of the variance of the short rate [i.e., the 3‐month rate].” The second regularity is that “the second principal component is very similar to the yield spread between the short rate and the long [i.e., 13‐year] rate.” Having demonstrated these facts—and also that the first component is correlated even more strongly with the long rate—the authors henceforth use just the short and long rates.

More substantively (and more questionably), the third and fourth regularities pertain to expected excess returns in term‐structure and international exchange rate contexts.
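The principal-components calculation behind the first two regularities can be sketched in a few lines. Since the authors' Treasury panel is not reproduced here, the sketch below runs on simulated yields built from hypothetical "level" and "slope" factors; all numbers are illustrative assumptions, not their data.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 720                            # monthly observations, as in a 1946-2007 sample
mats = np.arange(14)               # 0 = short (3-month) rate, 1..13 = k-year yields

# Hypothetical two-factor yield panel: a highly persistent "level" factor and a
# mean-reverting "slope" factor that loads more heavily at long maturities.
level = np.cumsum(rng.normal(0.0, 0.10, T))
slope = np.zeros(T)
for t in range(1, T):
    slope[t] = 0.95 * slope[t - 1] + rng.normal(0.0, 0.08)
loadings = mats / mats.max()
yields = level[:, None] + slope[:, None] * loadings + rng.normal(0.0, 0.02, (T, 14))

# Principal components from the eigendecomposition of the yield covariance matrix.
X = yields - yields.mean(axis=0)
eigval, eigvec = np.linalg.eigh(np.cov(X, rowvar=False))
order = np.argsort(eigval)[::-1]                  # eigh returns ascending eigenvalues
eigval, eigvec = eigval[order], eigvec[:, order]
shares = eigval / eigval.sum()
pcs = X @ eigvec

# First regularity: PC1 dominates. Second regularity: PC2 tracks the yield spread.
spread = yields[:, 13] - yields[:, 0]             # long rate minus short rate
corr = np.corrcoef(pcs[:, 1], spread)[0, 1]
print(f"PC1 variance share: {shares[0]:.1%}")
print(f"|corr(PC2, long-short spread)|: {abs(corr):.2f}")
```

With any data of this general shape, the first component picks up the common level of rates and the second lines up with the long-short spread, which is the pattern the authors report.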
Specifically, movements in yield spreads and exchange rate premia are “associated with movements in risk.” The way in which these regularities might be regarded by some readers as questionable is that, in many studies, “risk” is operationally the name given to differentials in expected returns that the analyst’s model is not able to explain.

Later in the paper, in Section V.A, Atkeson and Kehoe plot short‐rate and long‐rate time series for the United States over an extended period from 1836 through 2007. In addition, they include analogous plots for the United Kingdom, France, Germany, and the Netherlands. In all of these, the fluctuations of the long rate represent “a much smaller fraction of overall fluctuations in the short rate than they are in the postwar period.” Thus, they state: “A central question in the analysis of monetary policy at the secular level then is, What institutional changes led to this pattern?” In the preliminary version of this comment, I responded to a more pointed and strongly emphasized version of this query by stating that, to me, it is no surprise that expectations of future interest rates became unanchored during the post–World War II period, because, to again quote myself:

[the] collapse of the Bretton Woods system created, for the first time in history, a situation in which the world’s leading central banks were responsible for conducting monetary policy without an externally imposed monetary standard (often termed a “nominal anchor”). Previously, central banks had normally operated under the constraint of some metallic standard (e.g., a gold or silver standard), with wartime departures being understood to be temporary, i.e., of limited duration. Some readers might not think of the Bretton Woods system as one incorporating a metallic standard, but by design it certainly was, since the values of all other currencies were pegged to the U.S. dollar and the latter was pegged to gold at $35 per ounce.
(McCallum 1999, 175–76)

All in all, it seems that there is no difficulty in understanding why an altered monetary policy regime generated different expectations regarding inflation, and therefore future short interest rates, in the post–World War II era. The variability in long rates during the 1960s developed as market participants began to see that the United States was not going to be bound by its commitment to maintain the $35 per ounce price of gold. The variability then jumps up around the time of the Bretton Woods collapse in 1971—see Atkeson and Kehoe’s figures 6A–6E—and continues to rise into the Volcker disinflation, which was painful (with extremely high nominal interest rates) but ultimately succeeded in restoring some semblance of a nominal anchor.

What about the return to stability that may have occurred around 1990? That year is, of course, the year in which the first central bank (New Zealand) officially adopted a monetary policy regime of “inflation targeting” (IT). At that time, this was taken to mean a policy whose only objective was a low and stable inflation rate. Since then, the IT term has come to be applied to regimes that give more weight to output/employment stabilization, but most monetary economists understand it as continuing to emphasize inflation control as the primary goal. So again the timing is about right for the possible recovery of anchored expectations that the first empirical regularity is said to reflect.

To this general line of argument, Atkeson and Kehoe object: “But this answer is, at best, superficial. In the prewar era, countries chose to be on the gold standard most of the time and chose to leave it when it suited their purposes. Thus, the relevant questions are, rather, What deeper forces led agents to have confidence that their governments would choose stable policy over the long term? And what forces led them to lose this confidence after World War II?
Only if we can quantitatively account for this history can we give advice on how to avoid another great inflation.”

In this regard it must be said that I consider an explanation of the evolution of beliefs regarding the monetary standard, held by citizens of the United States, Great Britain, Germany, and so forth, to be somewhat beyond the scope of monetary policy analysts. To think about this issue, one must recognize that historically “the gold standard” required not just that the monetary authority stand ready to exchange gold and currency at a specified rate but also that this rate remain unchanged “forever.” That arrangement ensured that severe inflation would not occur—even the major historical gold discoveries did not generate sustained inflation on the order of 10% per year—but it did generate more cyclical instability of real variables than we have had in the postwar era. Could policy of that type win popular support in today’s environment in the United States? If not, which would be my answer, then we need an entire unified social science to provide an explanation at “a deeper level.” And such an explanation—which would need to emphasize enormous developments in the media, extensions of suffrage, evolution of religious beliefs, attitudes toward the role of government, and so on—would not be of much help to central bankers. Let us turn, then, to monetary policy analysis considered more narrowly.

III. Basic Analysis

The heart of Atkeson and Kehoe’s paper is a recommended response to the third and fourth of the regularities mentioned above, that is, that measured excess returns on multiperiod bonds fluctuate strongly with yield spreads for bonds of different maturities and for international exchange rates.
These regularities are translated by Atkeson and Kehoe into an argument that the consumption Euler equation, some version of which (often termed an expectational IS equation) is one basic ingredient of current macro‐monetary models, performs very poorly empirically. This is, of course, true for the simplest versions, but that problem has been widely recognized by monetary economists. A nice overview of empirical weaknesses of so‐called New Keynesian models was provided some years ago in a working paper by Richard Dennis (2003), which is briefly and nontechnically summarized in Dennis (2004). (The weaknesses discussed there relate to the Calvo‐style price adjustment relation, as well as the consumption Euler equation.) Dennis distinguishes between the bare‐bones “canonical model” and a “hybrid” version that adds habit formation in consumption behavior to the basic consumption‐saving relationship and also adds a somewhat dubious dependence on lagged inflation to the basic Calvo price adjustment relation. He recognizes, following Estrella and Fuhrer (2002), that “the problem with the canonical model is that the behavior of output, consumption, prices, and interest rates suggested by the model are fundamentally at odds with observed data” (Dennis 2004, 1). The hybrid model performs better, in terms of matching quarterly data, but “there are a number of areas where the hybrid model’s responses differ importantly from” impulse responses of an identified vector autoregression (VAR; Dennis 2004, 3).

The point here is that monetary economists are quite aware that current models, even with elaborations of the type utilized by Christiano, Eichenbaum, and Evans (2005) or Smets and Wouters (2007), have empirical weaknesses, and they have been active in trying to eliminate these problems by improved specification.
One pertinent and recent example concerns the discouraging results reported by Canzoneri, Cumby, and Diba (2007), that is, that inclusion of habit formation in consumption behavior unrealistically increases the variability of interest rates.4 Subsequent results by Collard and Dellas (2007) indicate, however, that this deterioration obtains when the household utility function is taken to be additively separable in consumption and leisure. If instead consumption and leisure enter the function in a Cobb‐Douglas manner, then inclusion of habit results in an improved—not worsened—match of the model’s interest rate variability to that of the data.

I might also remark that Atkeson and Kehoe’s way of considering the empirical failure of the Euler equation seems questionable. Specifically, they discuss the relationship in a manner that would be appropriate if the role of this equation were to explain movements in nominal interest rates of various maturities. In fact, however, the role of this equation in standard monetary policy models is to explain consumption in response to (real) interest rates and expected future consumption (and, in habit specifications, lagged consumption). No mention of the adequacy or inadequacy of the standard model’s properties with regard to consumption is provided.5

Be that as it may, it is essential to consider the analytical heart of Atkeson and Kehoe's paper, which is their presentation of “a simple model of the pricing kernel that is consistent with these [observed] dynamics” pertaining to interest rates. For the one‐period nominal interest rate, $$i_{t}$$ in their notation, the pricing kernel $$m_{t+1}$$ is an unobservable random variable generated by a stochastic process such that $$i_{t}$$ can be determined by a relation of the form
$$i_{t}=-\log E_{t}\exp ( m_{t+1}) .$$
Assuming conditional lognormality, we then have
$$i_{t}=-E_{t}m_{t+1}-0.5\,\mathrm{Var}_{t}\,m_{t+1}.\qquad (1)$$
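Equation (1) is just the conditional-lognormality identity, by which the log of an expected exponential reduces to a mean term and a variance term. A quick Monte Carlo check of that identity, with arbitrary illustrative moments rather than values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative conditional moments for the log pricing kernel m_{t+1};
# the numbers are arbitrary, chosen only to verify the identity behind eq. (1).
mu, var = -0.01, 0.0004

m = rng.normal(mu, np.sqrt(var), 2_000_000)      # conditionally normal draws

i_exact = -mu - 0.5 * var                        # eq. (1): -E_t m - 0.5 Var_t m
i_mc = -np.log(np.mean(np.exp(m)))               # -log E_t exp(m_{t+1}), simulated

print(f"closed form: {i_exact:.6f}   Monte Carlo: {i_mc:.6f}")
```

The two numbers agree to several decimal places, which is the whole content of the lognormality assumption: the short rate depends only on the conditional mean and conditional variance of the kernel.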
Except for lognormality, the content of their model for $$i_{t}$$ is then the specification of the stochastic process generating $$m_{t+1}$$. They take it to be
$$-m_{t+1}=\delta +z_{1t}+\sigma _{1}\varepsilon _{1,t+1}+( 1-\lambda ^{2}/2) z_{2t}+z_{2t}^{0.5}\lambda \varepsilon _{2,t+1}+\sigma _{3}\varepsilon _{3,t+1},\qquad (2)$$
where $$\varepsilon _{1t}$$, $$\varepsilon _{2t}$$, and $$\varepsilon _{3t}$$ are independent, standard normal, white‐noise innovations and where
$$z_{1,t+1}=z_{1t}+\sigma _{1}\varepsilon _{1,t+1},\qquad (3)$$
$$z_{2,t+1}=( 1-\varphi ) \theta +\varphi z_{2t}+z_{2t}^{0.5}\sigma _{2}\varepsilon _{2,t+1}.\qquad (4)$$
These processes are chosen with an eye to their implications for the term structure via the relation
$$1=E_{t}\exp ( m_{t+1}+p_{t+1}^{k-1}-p_{t}^{k}) ,\qquad (5)$$
which characterizes the absence of arbitrage possibilities for k‐period bonds with log prices $$p_{t}^{k}$$. From these prices the analyst can calculate term‐structure measures.

Finally, Atkeson and Kehoe calibrate the model by assuming that $$\lambda =\sqrt{2}$$, $$\varphi =0.99$$, and $$\sigma _{2}=0.017$$. This specification suffices, they report, to generate interest rates of different maturities whose long and short rates display the general characteristics found in their exploration of monthly U.S. data for rates of various maturities.

How does this model compare in specification with the standard three‐equation framework used in recent years to model one‐period interest rates, consumption (and/or output), and inflation by Clarida, Gali, and Gertler (1999), McCallum (2001), Woodford (2003, 238–47), and dozens of other monetary economists? That framework, as is well known, consists of (i) a consumption Euler equation (aka expectational IS relation), (ii) a price adjustment relation (usually of the Calvo variety), and (iii) a monetary policy rule that specifies adjustments of the one‐period nominal policy rate $$i_{t}$$ to its determinants, which include the steady‐state real interest rate, the central bank’s inflation target, departures of inflation from target, and departures of output from its natural (flexible‐price) rate.
(The lagged rate $$i_{t-1}$$ is often included as well to represent smoothing.) This framework implicitly adopts the expectations theory of the term structure, which is known to be inconsistent with the data. Notable examples of larger models that include more variables and equations but that have the same basic underlying logic are provided by Christiano et al. (2005) and Smets and Wouters (2007).

One aspect of the comparison is that the Atkeson‐Kehoe model, since it pertains to an “endowment economy,” implicitly assumes that price level adjustments are complete within each period, so that output is always equal to its exogenous natural‐rate (flexible‐price) value. Only a degenerate version of the Calvo equation component of the standard model is therefore present. That removes one endogenous variable, output/consumption. For some purposes, a flexible‐price model can be useful for monetary policy principles, as in Woodford (2003, chap. 2). But Atkeson and Kehoe also treat inflation as exogenous. Thus no possibility remains for conducting monetary policy analysis: inflation is not determined by central bank behavior. Those features are consistent with their expressed view that the central bank “simply responds to exogenous changes in real risk—specifically, to exogenous changes in the conditional variance of the real pricing kernel—with the aim of maintaining inflation close to a target level.” But this seems highly unsatisfactory. It is probably true that a substantial portion of the meeting‐to‐meeting variations in the federal funds rate in the United States represents adjustments that are responses to changes in real rates that are brought about by changes in tastes, technology, shocks from abroad, and even perhaps some random behavioral errors by private agents.
In fact, this is implied by much of the analysis that represents today’s mainstream monetary policy analysis—see, for example, Woodford (2003). But in the modeling approach suggested by Atkeson and Kehoe, the short rate simply responds to exogenous random forces. No evidence is provided that their model would do a creditable job of matching data on inflation and output, much less of treating these two variables as endogenous and influenced by central bank behavior described by a policy rule for a controllable variable; as it stands, the model is not in contention as a framework for monetary policy analysis. Their discussion might also lead readers to believe that standard models have Euler equations that include no term reflecting risk.

Still, Atkeson and Kehoe are right to say that the Euler equation specification in many monetary models does not perform well empirically. In addition, their specification of stochastic processes for the exogenous variables, which yields a pricing kernel with term‐structure features that match the data in important ways, is interesting and useful. They are correct in arguing that models in which conditional variances of returns are variable provide an attractive possibility for improved model specification. This is not new, of course, and it does not justify the treatment of inflation and output as exogenous, or commitment to a model that leads to their highly questionable conclusions about the nature of monetary policy in the United States (and, presumably, other economies). There is a branch of the monetary policy literature that combines term‐structure pricing with time‐varying risk premia in models with endogenous price adjustment and monetary policy rules. Work in that literature has gone beyond Atkeson and Kehoe in developing models that match the term‐structure regularities while maintaining a framework suitable for monetary policy analysis. Moreover, the approach of modeling time‐varying conditional variances is not the only route to improvement, as the Collard and Dellas (2007) example indicates.

In conclusion, I am sympathetic to parts of the Atkeson and Kehoe critique of some features of today’s New Keynesian monetary policy models, but I find their current paper unpersuasive in essential respects: their account of U.S. monetary policy seems implausible, and their critique of current monetary policy analysis overstated. For a brief statement, see Alvarez, Atkeson, and Kehoe (2007).
References

Alvarez, Fernando, Andrew Atkeson, and Patrick J. Kehoe. 2007. “If Exchange Rates Are Random Walks, Then Almost Everything We Say about Monetary Policy Is Wrong.” American Economic Review 97 (2): 339–45.
Canzoneri, Matthew B., Robert E. Cumby, and Behzad T. Diba. 2007. “Euler Equations and Money Market Interest Rates: A Challenge for Monetary Policy Models.” Journal of Monetary Economics 54.
Christiano, Lawrence J., Martin Eichenbaum, and Charles L. Evans. 2005. “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy.” Journal of Political Economy 113 (1): 1–45.
Clarida, Richard, Jordi Gali, and Mark Gertler. 1999. “The Science of Monetary Policy: A New Keynesian Perspective.” Journal of Economic Literature 37 (4): 1661–1707.
Collard, Fabrice, and Harris Dellas. 2007. Working paper.
Dennis, Richard. 2003. “New Keynesian Optimal‐Policy Models: An Empirical Assessment.” Working paper, Federal Reserve Bank of San Francisco.
Dennis, Richard. 2004. FRBSF Economic Letter, Federal Reserve Bank of San Francisco.
Estrella, Arturo, and Jeffrey C. Fuhrer. 2002. “Dynamic Inconsistencies: Counterfactual Implications of a Class of Rational‐Expectations Models.” American Economic Review 92 (4): 1013–28.
Goodfriend, Marvin. 2007. “How the World Achieved Consensus on Monetary Policy.” Journal of Economic Perspectives 21 (4): 47–68.
McCallum, Bennett T. 1999. “Recent Developments in Monetary Policy Analysis: The Roles of Theory and Evidence.” Journal of Economic Methodology 6.
McCallum, Bennett T. 2001. “Should Monetary Policy Respond Strongly to Output Gaps?” American Economic Review 91 (2): 258–62.
Smets, Frank, and Rafael Wouters. 2007. “Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach.” American Economic Review 97 (3): 586–606.
Woodford, Michael. 2003. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ: Princeton University Press.

  • Research Article
  • Cite Count Icon 1
  • 10.22459/ag.22.01.2015.01
Australia and the Zero Lower Bound on Interest Rates: Some Monetary Policy Options
  • Dec 7, 2015
  • Agenda - A Journal of Policy Analysis and Reform
  • Declan Trott

Could 'it' happen here?

Australia survived the global financial crisis relatively unscathed, despite much higher interest rates than in other developed countries (see Figure 1). But our exceptional status may be short-lived. In May 2015, the cash rate was cut to 2 per cent, the lowest level since the current series began in 1990, and by some measures the lowest short-term rate since at least 1960. Nevertheless, unemployment has risen steadily, even though investment in the resources sector has yet to fall back to normal levels.

It would seem prudent, then, to be prepared for interest rates to approach zero, as has already occurred in most other OECD countries.2 Sheehan and Gregory (2013) and Freebairn and Corden (2013) have called for increased infrastructure spending on this basis. Yet, as these authors acknowledge, such a policy faces considerable challenges. Furthermore, if warnings of 'secular stagnation' have any validity, near-zero interest rates may not be a once-in-a-generation emergency to be dealt with using ad hoc expedients, but an increasingly common situation requiring a more systematic response. In this case, changes to monetary policy would appear more desirable.

This paper considers various means by which monetary policy may stimulate aggregate demand when its usual instrument - the short-term interest rate - is unavailable. The objectives of monetary policy are taken here to be the conventional ones of a stable value of money and full employment, analysed under the broadly 'Keynesian' assumption that fluctuations in nominal spending have significant effects on real output. Financial stability objectives are assumed to be dealt with separately via (macro)prudential regulation.3

The paper begins by describing the problem of the zero lower bound, and the logic of expectations management in general and level targeting in particular as a solution.
It then argues, however, that level targeting requires a more careful choice of target variable than the current regime of inflation targeting, and that the price level, nominal GDP and nominal wages all appear problematic. A more eclectic mix of policies is then considered.

The zero lower bound, and the vital role of expectations

As long as money can be stored at negligible cost, interest rates cannot fall below zero, because hoarding money would then be a superior alternative to lending it. This poses a problem for conventional monetary policy, under which the usual response to falling inflation or rising unemployment is a lower interest rate. What happens if the interest rate cannot be cut any further? The experience of Japan in the 1990s, and much of the world since 2008, shows that near-zero interest rates are perfectly compatible with low and falling inflation, high and rising unemployment, and output well below previous estimates of potential.

Can monetary policy still be effective in such a situation? Theory suggests that expectations management is the key. In the standard permanent income or life-cycle model, current spending is determined by expected future income and interest rates. If promises about future policy are successful in changing these expectations, they may increase demand without any immediate change in the central bank's balance sheet. Conversely, even a very large balance sheet expansion may be ineffective if it is believed to be temporary and thus does not change expectations.4 Krugman (1998), motivated by the then unusual experience of Japan, showed that, while a temporary monetary expansion would fail to raise prices and output at the zero lower bound (since it cannot change the current interest rate, and, being temporary, will not affect any future variables), a credible permanent expansion could work, by increasing the expected future price level and therefore reducing the real interest rate.
(This assumes that a permanently higher money supply must eventually create proportionally higher prices at some point in the future when the interest rate rises above zero. …
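Krugman's temporary-versus-permanent distinction comes down to Fisher-equation arithmetic: at the bound the nominal rate is pinned at zero, so only expected inflation can move the real rate. A minimal sketch with made-up numbers:

```python
# Fisher-equation arithmetic behind Krugman's argument; all numbers are
# illustrative assumptions, not estimates. Real rate r = i - E[inflation].

i = 0.0                                  # nominal rate stuck at the zero lower bound

# Temporary expansion: the price level is expected to fall back, so expected
# inflation is roughly zero (here mildly negative) and the real rate cannot fall.
e_pi_temporary = -0.01
r_temporary = i - e_pi_temporary         # real rate stays positive

# Credible permanent expansion: the future price level is expected to be about
# 4% higher, so expected inflation rises and the real rate turns negative.
e_pi_permanent = 0.04
r_permanent = i - e_pi_permanent         # stimulus despite i = 0

print(f"temporary: r = {r_temporary:+.2%}   permanent: r = {r_permanent:+.2%}")
```

The only lever at the bound is the expected-inflation term, which is why credibility of the permanent expansion does all the work in the argument.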

  • Research Article
  • Cite Count Icon 126
  • 10.1353/eca.2017.0004
Monetary Policy in a Low Interest Rate World
  • Jan 1, 2017
  • Brookings Papers on Economic Activity
  • Michael T Kiley + 1 more

Nominal interest rates may remain substantially below the averages of the last half century, because central banks' inflation objectives lie below the average level of inflation, and estimates of the real interest rate that are likely to prevail over the long run fall notably short of the average real interest rate experienced during this period. Persistently low nominal interest rates may lead to more frequent and costly episodes at the effective lower bound (ELB) on nominal interest rates. We revisit the frequency and potential costs of such episodes in a world of low interest rates, using both a dynamic stochastic general equilibrium (DSGE) model and the Federal Reserve's large-scale econometric model, the FRB/US model. Four main conclusions emerge. First, monetary policy strategies based on traditional, simple policy rules lead to poor economic performance when the equilibrium interest rate is low, with economic activity and inflation more volatile and systematically falling short of desirable levels. Moreover, the frequency and length of ELB episodes under such policy approaches are estimated to be significantly higher than in previous studies. Second, a risk adjustment to a simple rule—whereby monetary policymakers are more accommodative, on average, than prescribed by the rule—ensures that inflation averages its 2 percent objective, and requires that policymakers systematically seek inflation near 3 percent when the ELB is not binding. Third, commitment strategies, whereby monetary accommodation is not removed until either inflation or economic activity overshoots its long-run objective, are very effective in both the DSGE and FRB/US models. And fourth, our simulation results suggest that the adverse effects on economic and price stability associated with the ELB may be substantial at inflation targets near 2 percent if the equilibrium real interest rate is low and monetary policy follows a traditional approach. 
Whether such adverse effects could justify a higher inflation target depends upon the degree to which monetary policy strategies that differ substantially from such traditional approaches are feasible, and an assessment of a broader array of the inflation target's effects on economic welfare.
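The first of these conclusions can be seen in qualitative form without a full DSGE or FRB/US simulation. The toy model below is my own illustrative parameterization, not the authors' models: it truncates a Taylor-type rule at zero in a small backward-looking economy, and shows that with a low equilibrium real rate the bound binds more often and inflation averages below target.

```python
import numpy as np

def simulate(r_star, T=50_000, pi_star=2.0, seed=7):
    """Toy ELB economy: a Taylor-type rule truncated at zero, a simple IS curve,
    and a partly anchored Phillips curve. All parameters are illustrative only."""
    rng = np.random.default_rng(seed)
    pi, x = pi_star, 0.0
    pi_sum, elb_count = 0.0, 0
    for _ in range(T):
        # Rule responds to last period's inflation and output gap; floored at zero.
        i = max(0.0, r_star + pi + 0.5 * (pi - pi_star) + 0.5 * x)
        elb_count += (i == 0.0)
        u = rng.normal(0.0, 2.0)                   # demand shock
        x = -(i - pi - r_star) + u                 # IS curve: gap falls with the real rate
        pi = 0.8 * pi + 0.2 * pi_star + 0.3 * x    # Phillips curve with partial anchoring
        pi_sum += pi
    return pi_sum / T, elb_count / T

for r_star in (3.0, 1.0):
    mean_pi, elb_freq = simulate(r_star)
    print(f"r* = {r_star:.0f}%: mean inflation {mean_pi:.2f}%, ELB frequency {elb_freq:.1%}")
```

Because easing is cut off in bad states while tightening is unconstrained, the truncation is asymmetric, and the asymmetry grows as the equilibrium rate falls; that is the qualitative mechanism behind the paper's first conclusion and its case for a risk adjustment.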

  • Research Article
  • Cite Count Icon 1
  • 10.1086/593092
Comment
  • Jan 1, 2008
  • NBER Macroeconomics Annual
  • John H Cochrane

Comment

  • Front Matter
  • Cite Count Icon 1
  • 10.1111/j.1467-8462.2008.00501.x
Editor's Introduction
  • Jun 1, 2008
  • Australian Economic Review
  • Stephen T Sedgwick

Editor's Introduction

  • Research Article
  • Cite Count Icon 33
  • 10.1086/669584
Taylor Rule Exchange Rate Forecasting during the Financial Crisis
  • Mar 1, 2013
  • NBER International Seminar on Macroeconomics
  • Tanya Molodtsova + 1 more

Tanya Molodtsova (Emory University) and David H. Papell (University of Houston)

I. Introduction

The past few years have seen a resurgence of academic interest in out-of-sample exchange rate predictability. Gourinchas and Rey (2007, using an external balance model); Engel, Mark, and West (2008, using monetary, Purchasing Power Parity [PPP], and Taylor rule models); and Molodtsova and Papell (2009, using a variety of Taylor rule models) all report successful results for their models vis-à-vis the random walk null. There has even been the first revisionist response. Rogoff and Stavrakeva (2008) criticize the three abovementioned papers for their reliance on the Clark and West (2006) statistic, arguing that it is not a minimum mean squared forecast error statistic.

An important problem with these papers is that none of them use real-time data that was available to market participants.1 Unless real-time data is used, the "forecasts" incorporate information that was not available to market participants, and the results cannot be interpreted as successful out-of-sample forecasting. Faust, Rogers, and Wright (2003) initiated research on out-of-sample exchange rate forecasting with real-time data. Molodtsova, Nikolsko-Rzhevskyy, and Papell (2008) use real-time data to estimate Taylor rules for Germany and the United States and forecast the Deutsche mark/dollar exchange rate out-of-sample for 1989:Q1 to 1998:Q4.
Molodtsova, Nikolsko-Rzhevskyy, and Papell (2011), henceforth MNP (2011), use real-time data to show that inflation and either the output gap or unemployment, variables which normally enter central banks' Taylor rules, can provide evidence of out-of-sample predictability for the US dollar/euro exchange rate from 1999 to 2007. Adrian, Etula, and Shin (2011) show that the growth of US dollar-denominated banking sector liabilities forecasts appreciations of the US dollar from 1997 to 2007, but their results break down in 2008 and 2009.

Molodtsova and Papell (2009) conduct out-of-sample exchange rate forecasting with Taylor rule fundamentals, using the variables, including inflation rates and output gaps, that normally comprise Taylor rules. Engel, Mark, and West (2008) propose an alternative methodology for Taylor rule out-of-sample exchange rate forecasting. Using a Taylor rule with prespecified coefficients for the inflation differential, output gap differential, and real exchange rate, they construct the interest rate differential implied by the policy rule and use the resultant differential for exchange rate forecasting. We use a single equation version of their model, which we call the Taylor rule differentials model.2 Since there is no evidence that either the Fed or the European Central Bank (ECB) targets the exchange rate, we do not include the real exchange rate in the forecasting regression for either model.3

Out-of-sample exchange rate forecasting with Taylor rule fundamentals received blogosphere, as well as academic, notice in 2008. On July 28 and September 9, Menzie Chinn posted on Econbrowser a discussion of in-sample estimates of one of the specifications used in an early version of MNP (2011).4 On August 17, he posted an article by Michael Rosenberg of Bloomberg, who discussed Taylor rule fundamentals as a foreign currency trading strategy. By December 22, however, optimism had turned to pessimism.
Once interest rates hit the zero lower bound, they cannot be lowered further. With zero or near-zero interest rates for Japan and the United States, and predicted near-zero rates for the United Kingdom and the Euro Area, the prospects for Taylor rule exchange rate forecasting were bleak. A second theme of the post, however, was that there was nothing particularly promising on the horizon. A return to the monetary model, even in a regime of quantitative easing, faced doubtful prospects for success.5

The events of 2007 to 2009 focused the attention of economists on the importance of financial conditions. On August 9, 2007, the spread between the dollar London interbank offer rate (Libor) and the overnight index swap (OIS), an indicator of financial stress in the interbank loan market, jumped from 13 to 40 basis points on concerns that problems in the subprime mortgage market were spreading to the broader mortgage market.6 The spreads mostly fluctuated between 50 and 90 basis points until September 17, 2008, when they spiked following the announcement that Lehman Brothers had filed for bankruptcy, peaking on October 10 at over 350 basis points. Following the end of the panic phase of the financial crisis in October 2008, the spread gradually returned to near precrisis levels in September 2009. The spread increased again, although not nearly as sharply, in mid-2010 and late 2011. The spreads are depicted in figure 1 (credit spreads and financial stress indexes with their differentials).

The deteriorating financial situation in late 2007 and 2008 inspired several proposals for linking monetary policy to financial conditions. Mishkin (2008) argued that, when a financial disruption occurs, the Fed should cut interest rates to offset the negative effects of financial turmoil on aggregate economic activity.
McCully and Toloui (2008) suggested that, because of tightened financial conditions, the Fed needed to lower the policy rate by 100 basis points in early February 2008 in order to keep the neutral rate constant. Meyer (2009) argued that the Taylor rule without considerations of financial conditions could not explain aggressive Fed policy in early 2008.

Taylor (2008) proposed adjusting the systematic component of monetary policy by subtracting a smoothed version of the Libor-OIS spread from the interest rate target that would otherwise be determined by deviations of inflation and real GDP from their targets according to the Taylor rule. He argued that such an adjustment, which would have been about 50 basis points in late February 2008, would be a more transparent and predictable response to financial market stress than a purely discretionary adjustment.

Curdia and Woodford (2010) modify the Taylor rule with an adjustment for changes in interest rate spreads. Using a dynamic stochastic general equilibrium (DSGE) model with credit frictions, they show that incorporating spreads can improve upon a standard Taylor rule, although the optimal size of the adjustment is smaller than proposed by Taylor and depends on the source of variation in the spreads.

The spread between the euro interbank offer rate (Euribor) and the euro OIS also jumped in August 2007 and spiked in September and October 2008, although not by as much as the US spread. While the Euribor-OIS spread came down in September 2009, it did not return to its precrisis levels. During August and December 2010, the spread jumped to as high as 40 basis points and, in December 2011, reached a maximum of 100 basis points. The end-of-quarter Libor-OIS, Euribor-OIS, and the difference between the Libor-OIS and Euribor-OIS spreads are depicted in figure 1.
After the gap between the two spreads narrowed in 2008:Q4, the spread differential turned against the Euro Area, reaching a maximum in 2011:Q3 and 2011:Q4 before narrowing in 2012:Q1.

This paper investigates out-of-sample exchange rate forecasting during the financial crisis with Taylor rule-based models that incorporate indicators of financial stress. We use one-quarter-ahead forecasts and estimate models with core inflation and both the output gap and the unemployment gap for the Taylor rule fundamentals and Taylor rule differentials models.7 When the Libor-OIS/Euribor-OIS differential is included in the forecasting regression, we call the models spread-adjusted Taylor rule fundamentals and differentials models. According to these models, when the Libor-OIS spread increases, the Fed would be expected to either lower the interest rate or, if it had already attained the zero lower bound, engage in quantitative expansion, depreciating the dollar. When the Euribor-OIS spread increases, the ECB would be expected to react similarly, depreciating the euro. We therefore use the difference between the Libor-OIS and Euribor-OIS spreads, in addition to the difference between the United States and Euro Area inflation rates and output gaps, for out-of-sample forecasting of the dollar/euro exchange rate.

Another widely used credit spread is the Ted spread, the three-month Libor/three-month Treasury spread for the United States and the three-month Euribor/three-month Treasury spread for the Euro Area. As shown in figure 1, the US Ted spread was generally higher than the Euro Area Ted spread until 2008, and the Ted spread differential was more variable than the Libor-OIS/Euribor-OIS differential. The Euro Area Ted spread spiked with the US Ted spread in 2008:Q3, and so the differential does not display a spike at the peak of the financial crisis. Subsequent to the financial crisis, the Ted spread differential is similar to the Libor-OIS/Euribor-OIS differential.
It turns against the Euro Area in 2009, reaches a maximum in 2011:Q3 and 2011:Q4, and narrows in 2012:Q1. We use the difference between the US and Euro Area Ted spreads as an alternative indicator of financial stress.

Financial Conditions Indexes (FCIs) that summarize the information about the future state of the economy contained in a number of current financial variables have received considerable attention in recent years. Hatzius et al. (2010) show that FCIs outperform individual financial variables that are considered to be useful leading indicators in their ability to predict the growth of different measures of real economic activity. We therefore augment the Taylor rule by using the difference between the Bloomberg and Organization for Economic Cooperation and Development (OECD) FCIs for the United States and the Euro Area for out-of-sample forecasting of the dollar/euro exchange rate.8 The Bloomberg and OECD FCIs are depicted in figure 1 where, in contrast to the credit spreads, an increase represents an improvement in financial conditions. Financial conditions deteriorate sharply for both the United States and the Euro Area in late 2008, but turn in favor of the United States starting in 2009.

Real-time data for the United States are available in vintages starting in 1966, with the data for each vintage going back to 1947. Real-time data for the Euro Area, however, are only available in vintages starting in 1999:Q4, with the data for each vintage going back to 1991:Q1. While the euro/dollar exchange rate is only available since the advent of the euro in 1999, "synthetic" rates are available since 1993. We use rolling regressions to forecast exchange rate changes starting in 1999:Q4, with 26 observations in each regression. Keeping the number of observations constant, we report results ending in 2007:Q1, with 30 forecasts, through 2012:Q1, with 50 forecasts.
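The rolling forecasting design, and the comparison against the no-change random walk used to evaluate it, can be sketched as follows. This is a simplified illustration of the mechanics, not the paper's code; the variable names are hypothetical. Each fixed 26-observation window estimates the forecasting regression by ordinary least squares, and the fitted coefficients generate a one-quarter-ahead forecast of the exchange rate change:

```python
import numpy as np

def rolling_forecasts(dy, X, window=26):
    """One-step-ahead rolling OLS forecasts with a fixed window size.

    dy : (T,) realized one-quarter exchange rate changes
    X  : (T, k) Taylor rule fundamentals differentials known at the
         start of each quarter (inflation, gap, and spread differentials)
    Returns model forecasts for periods window .. T-1.
    """
    T = len(dy)
    Xc = np.column_stack([np.ones(T), X])           # add an intercept
    preds = np.empty(T - window)
    for t in range(window, T):
        beta, *_ = np.linalg.lstsq(Xc[t - window:t], dy[t - window:t],
                                   rcond=None)      # estimate on the window
        preds[t - window] = Xc[t] @ beta            # forecast for period t
    return preds

def mspe_ratio_and_cw(realized, model_forecast):
    """MSPE ratio (model over random walk) and the Clark-West statistic.
    The random walk forecast of the change is zero, so its forecast error
    is the realized change itself.  Reject the random walk null for large
    positive CW values (one-sided standard normal critical values:
    1.282 at 10 percent, 1.645 at 5 percent)."""
    e_rw = realized
    e_model = realized - model_forecast
    # Clark-West adjusted loss differential
    f = e_rw**2 - (e_model**2 - model_forecast**2)
    ratio = np.mean(e_model**2) / np.mean(e_rw**2)
    cw = np.sqrt(len(f)) * np.mean(f) / np.std(f, ddof=1)
    return ratio, cw
```

An MSPE ratio below one and a CW statistic above the one-sided critical value together indicate that the linear model outperforms the random walk, which is the evaluation criterion applied throughout the paper.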
We report the ratio of the mean squared prediction errors (MSPE) of the linear and random walk models and the CW test statistic of Clark and West (2006).9

The Taylor rule fundamentals model with the unemployment gap produces very strong results. The MSPE of the Taylor rule model is smaller than the MSPE of the random walk model, and the random walk null can be rejected in favor of the Taylor rule model using the CW test at the 5 percent level for the initial set of forecasts ending in 2007:Q1. As the number of forecasts increases, the MSPE ratios decrease and the strength of the rejections increases, peaking at the 1 percent level in 2008:Q1. In the following quarter, 2008:Q2, the MSPE ratios start to rise and continue to increase through 2009:Q1 (although the rejections continue at the 5 percent level or higher). Starting in mid-2009, the MSPE ratios stabilize, and the random walk can be rejected in favor of the Taylor rule model at the 5 percent significance level for all specifications between 2009:Q2 and 2012:Q1.

The results for the other models are not as strong. For the Taylor rule differentials model with the output gap, the random walk null can be rejected at the 10 percent level or higher from 2007:Q1 to 2008:Q3 and from 2009:Q2 to 2009:Q4, but not otherwise. For the Taylor rule fundamentals model with the output gap and the Taylor rule differentials model with the unemployment gap, the random walk null can only be rejected at the 10 percent level or higher from 2007:Q1 to 2008:Q2.

A major innovation in this paper is to incorporate indicators of financial stress, measured by the difference between the Libor-OIS and Euribor-OIS spreads, the US and Euro Area Ted spreads, the US and Euro Area Bloomberg FCIs, and the US and Euro Area OECD FCIs, for out-of-sample exchange rate forecasting with Taylor rule models. The strongest results are again for the Taylor rule fundamentals model with the unemployment gap.
Using the OECD FCI, the random walk null can be rejected in favor of the linear model alternative at the 5 percent level for all but one set of forecasts, and at the 10 percent level for the remaining forecast. Using the three other indicators, the null can be rejected at the 10 percent level or higher for over half of the forecasts, with the strongest results for the forecasts ending between 2007 and 2009. As with the original Taylor rule model, the augmented Taylor rule differentials model with the output gap is the next most successful, with the random walk null rejected at the 10 percent level or higher for all forecasts using the OECD FCI and for over half of the forecasts with the three other indicators. The rejections for the other two augmented models are concentrated in 2007 and 2008.

We proceed to compare the original and augmented models for the two most successful specifications. For the Taylor rule fundamentals model with the unemployment gap, the original model null can be rejected in favor of the augmented model alternative at the 5 percent level for virtually every set of forecasts ending between 2007:Q1 and 2008:Q2 for all four financial stress indicators. For the forecasts ending between 2008:Q3 and 2012:Q1, however, the original model null is never rejected. For the Taylor rule differentials model with the output gap, there is some evidence in favor of the alternative specification with the Ted spread, Bloomberg FCI, and OECD FCI.

We also compare the out-of-sample performance of the Taylor rule models with the monetary, PPP, and interest rate differentials models. For the interest rate differentials model, the MSPE ratios are below one and the random walk can be rejected with the CW tests from 2007:Q1 to 2008:Q2. Starting with the panic period of the financial crisis in 2008:Q3, the MSPE ratios rise above one and the random walk null can only be rejected for the forecasts ending in 2009:Q1 and 2012:Q1.
The monetary and PPP models cannot outperform the random walk for any forecast interval. The evidence of out-of-sample exchange rate predictability is much stronger with the Taylor rule models than with the traditional models.

II. Exchange Rate Forecasting Models

Evaluating exchange rate models out of sample was initiated by Meese and Rogoff (1983), who could not reject the naïve no-change random walk model in favor of the existent empirical exchange rate models of the 1970s. Starting with Mark (1995), the focus of the literature shifted toward deriving a set of long-run fundamentals from different models, and then evaluating out-of-sample forecasts based on the difference between the current exchange rate and its long-run value. Engel, Mark, and West (2008) use the interest rate implied by a Taylor rule, and Molodtsova and Papell (2009) use the variables that enter Taylor rules to evaluate exchange rate forecasts.

A. Taylor Rule Fundamentals Model

We examine the linkage between the exchange rate and a set of variables that arise when central banks set the interest rate according to the Taylor rule. Following Taylor (1993), the monetary policy rule postulated to be followed by central banks can be specified as

it = πt + ϕ(πt − π*) + γyt + R,  (1)

where it is the target for the short-term nominal interest rate, πt is the inflation rate, π* is the target level of inflation, yt is the output gap (the percent deviation of actual real GDP from an estimate of its potential level), and R is the equilibrium level of the real interest rate.10 According to the Taylor rule, the central bank raises the target for the short-term nominal interest rate if inflation rises above its desired level and/or output is above potential output.
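The Taylor rule above and its reduced form with the inflation target and equilibrium real rate folded into a single constant prescribe identical rates. The following minimal sketch checks this numerically, using Taylor's original parameter values as defaults; the function names are illustrative:

```python
def taylor_rule_original(pi, y, phi=0.5, gamma=0.5, pi_star=2.0, R=2.0):
    """Equation (1): i = pi + phi*(pi - pi*) + gamma*y + R."""
    return pi + phi * (pi - pi_star) + gamma * y + R

def taylor_rule_reduced(pi, y, phi=0.5, gamma=0.5, pi_star=2.0, R=2.0):
    """The same rule with pi* and R combined into one constant:
    i = mu + lam*pi + gamma*y, where mu = R - phi*pi* and lam = 1 + phi.
    Because lam > 1, a rise in inflation raises the real interest rate,
    so the Taylor principle is satisfied."""
    mu = R - phi * pi_star
    lam = 1 + phi
    return mu + lam * pi + gamma * y
```

For example, with inflation at 3 percent and a 1 percent output gap, both forms prescribe 3 + 0.5 + 0.5 + 2 = 6 percent.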
The target level of the output deviation from its natural rate yt is 0 because, according to the natural rate hypothesis, output cannot permanently exceed potential output. The target level of inflation is positive because it is generally believed that deflation is much worse for an economy than low inflation. The unemployment gap, the difference between the unemployment rate and the natural rate of unemployment, can replace the output gap in equation (1), as in Blinder and Reis (2005) and Rudebusch (2010). In that case, the coefficient γ would be negative, so that the Fed raises the interest rate when the unemployment rate is below the natural rate of unemployment. Taylor assumed that the output and inflation gaps enter the central bank's reaction function with equal weights of 0.5 and that the equilibrium level of the real interest rate and the inflation target were both equal to 2 percent.

The parameters π* and R in equation (1) can be combined into one constant term, μ = R − ϕπ*, which leads to the following equation:

it = μ + λπt + γyt,  (2)

where λ = 1 + ϕ. Because λ > 1, the real interest rate is increased when inflation rises, and so the Taylor principle is satisfied. Following Taylor (2008) and Curdia and Woodford (2010), the original Taylor rule can be modified by subtracting a multiple δ of the spread between the dollar Libor rate and the OIS rate:

it = μ + λπt + γyt − δst,  (3)

where st is the spread.

We do not incorporate several modifications of the Taylor rule that, following Clarida, Galí, and Gertler (1998), are typically used for estimation. Lagged interest rates are usually included in estimated Taylor rules to account for either (a) partial adjustment of the federal funds rate to the rate desired by the Federal Reserve, or (b) desired interest rate smoothing on the part of the Federal Reserve.
Since the most successful exchange rate forecasting specifications for the dollar/euro rate in MNP (2011) did not include a lagged interest rate, and Walsh (2010) shows that the Federal Reserve lowered the interest rate during the financial crisis faster than would be consistent with interest rate smoothing, we do not include lagged interest rates. The real exchange rate is often included in specifications that involve countries other than the United States. Since there is no evidence that the ECB uses the real exchange rate as a policy objective, and inclusion of the real exchange rate worsens exchange rate forecasts in MNP (2011), we do not include it. Finally, while inflation forecasts are often used on the grounds that Federal Reserve policy is forward looking, there is no publicly available data on Euro Area core inflation forecasts.

To derive the Taylor rule based forecasting equation, we construct the implied interest rate differential by subtracting the interest rate reaction function for the Euro Area from that for the United States:

it − it* = α + λ(πt − πt*) + γ(yt − yt*) − δ(st − st*),  (4)

where asterisks denote Euro Area variables, α is a constant, and δ is the coefficient on the spread from the spread-adjusted rule. It is assumed that the coefficients on inflation and the output gap are the same for the United States and the Euro Area, but the inflation targets and equilibrium real interest rates are allowed to differ.11

Based on empirical research on the forward premium and delayed overshooting puzzles by Eichenbaum and Evans (1995), Faust and Rogers (2003), and Scholl and Uhlig (2008), and the results in Gourinchas and Tornell (2004) and Bacchetta and van Wincoop (2010), who show that an increase in the interest rate can cause sustained exchange rate appreciation if investors either systematically underestimate the persistence of interest rate shocks or make infrequent portfolio decisions, we postulate the following exchange rate forecasting equation:12

et+1 − et = ω − ωπ(πt − πt*) − ωy(yt − yt*) + ωs(st − st*),  (5)

where asterisks denote Euro Area variables, ω is a constant, and ωπ, ωy, and ωs are positive coefficients.
Alternatively, the unemployment gap differential (with opposite sign) can substitute for the output gap differential in equation (5).

The variable et is the log of the US dollar nominal exchange rate, defined as the domestic price of foreign currency, so that an increase in et is a depreciation of the dollar. The reversal of the signs of the coefficients between (4) and (5) reflects the presumption that anything that causes the Fed and/or ECB to raise the US interest rate relative to the Euro Area interest rate will cause the dollar to appreciate (a decrease in et). Since we do not know by how much a change in the interest rate differential (actual or forecasted) will cause the exchange rate to adjust, we do not have a link between the magnitudes of the coefficients in (4) and (5).13

The difference between the US and Euro Area Ted spreads, Bloomberg FCIs, and OECD FCIs can also be used as the measure of the spread differential. An increase in the US spreads relative to the Euro Area spreads would cause forecasted dollar depreciation. Because the FCIs are constructed so that an increase represents an improvement in financial conditions, the sign of the coefficient on the FCI differentials would be negative, so that a relative deterioration in US financial conditions would still lead to forecasted dollar depreciation.

B. Taylor Rule Differentials Model

Engel, Mark, and West (2008) propose an alternative Taylor rule based model, which we call the Taylor rule differentials model to differentiate it from both the interest rate differentials model and the Taylor rule fundamentals model. They posit, rather than estimate, coefficients for the Taylor rule and subtract the interest rate reaction function for the Euro Area from that for the United States to obtain implied interest rate differentials:

it − it* = 1.5(πt − πt*) + 0.5(yt − yt*),  (6)

where the constant is equal to zero, assuming that the inflation target and equilibrium real interest rate are the same for the United States and the Euro Area.
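The sign conventions in the forecasting equation can be illustrated with a small sketch. The coefficient values below are hypothetical placeholders (the paper estimates them from the data), and the function name is illustrative:

```python
def forecast_change(infl_diff, gap_diff, spread_diff,
                    omega=0.0, w_pi=0.5, w_y=0.3, w_s=0.2):
    """Forecast of the one-quarter change in the log dollar/euro rate
    (an increase is dollar depreciation).  Signs follow the text: a
    higher US inflation rate or output gap implies expected Fed
    tightening, hence forecasted appreciation (negative contribution);
    a higher US credit spread implies expected Fed easing, hence
    forecasted depreciation (positive contribution).  Coefficient
    values are illustrative, not estimates."""
    return omega - w_pi * infl_diff - w_y * gap_diff + w_s * spread_diff
```

A positive US-minus-Euro-Area inflation or output gap differential thus forecasts dollar appreciation, while a positive Libor-OIS minus Euribor-OIS differential forecasts depreciation; with an FCI differential as the stress measure, the spread coefficient would flip sign.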
Out-of-sample exchange rate forecasting is conducted using single equation and panel error correction models.14

We estimate a variant of the Taylor rule differentials model with two measures of economic activity: OECD estimates of the output gap and the unemployment gap. In order to obtain an implied interest rate differential that corresponds to the implied interest rate differential (6) with the unemployment gap as the measure of real economic activity, we use a coefficient of -1.0. This is consistent with a coefficient of 0.5 on the output gap if the Okun's law coefficient is 2.0.

The Taylor rule differentials model using Taylor's original coefficients would have a coefficient of 1.5 on the inflation differential, 0.5 on the output gap differential, and would not include the real exchange rate.15 During 2009 and 2010, a number of commentators, most notably Rudebusch (2010), argued that the appropriate output or unemployment gap coefficient in the Taylor rule for the United States should be double the coefficient in Taylor's original rule. While there has been an active policy debate on the normative question of whether prescribed Taylor rule interest rates should be calculated using Taylor's original specification or with larger coefficients, it is clear that the latter provide a better fit for Fed policy in the 2000s.16 Since the same argument has not been made for the ECB, we implement this by estimating a Taylor rule differentials model with a coefficient of 1.0 on the output gap (or -2.0 on the unemployment gap) for the United States and 0.5 on the output gap (or -1.0 on the unemployment gap) for the ECB:

it − it* = α + 1.5(πt − πt*) + 1.0yt − 0.5yt*,

where α is a constant.

The implied interest rate differential can be used to construct an exchange rate forecasting equation, where, as in the Taylor rule fundamentals model, the signs of the coefficients switch and we do not have a

  • Research Article
  • Cited by 2
  • 10.1111/1467-923x.12647
Macroeconomic Policy Beyond Brexit
  • Feb 21, 2019
  • The Political Quarterly
  • Simon Wren‑Lewis

  • Research Article
  • 10.1086/657549
Discussion
  • Jan 1, 2011
  • NBER Macroeconomics Annual

  • Research Article
  • Cited by 12
  • 10.1111/ecaf.12513
Monetary policy in a world of radical uncertainty
  • Feb 1, 2022
  • Economic Affairs
  • Mervyn King

  • Research Article
  • Cited by 3
  • 10.1007/s10272-015-0559-6
When Low Interest Rates Cause Low Inflation
  • Nov 1, 2015
  • Intereconomics
  • Markus Demary + 1 more

A new theory of interest rates, the Neo-Fisherian theory, predicts a low inflation rate due to a central bank’s low interest rate. After several years of near-zero interest rate policies and low and even negative inflation rates in the eurozone and in the US, this theory gained momentum in academic circles. Indeed, central banks have had a hard time reaching their inflation targets. This paper argues that it is not the low central bank policy rate which causes low inflation but rather the low equilibrium real interest rate, the economy’s real interest rate under full employment and stable prices, in combination with the zero lower bound on nominal interest rates, which restricts the effectiveness of monetary policy and causes low inflation. In order to stabilise inflation in the medium term, higher equilibrium real interest rates are necessary. Since monetary policy cannot move the equilibrium real interest rate, structural policies are needed.
