Understanding macroeconomic volatility in South Korea: a heterogeneous-agent New Keynesian framework with sticky prices

Abstract

This study employs a Heterogeneous Agent New Keynesian (HANK) model to analyze the impact of price stickiness on macroeconomic dynamics, policy responses, and distributional effects in South Korea. Using Impulse Response Functions (IRFs), it estimates the responses of output, real interest rates, and household consumption to monetary policy shocks and variations in the price stickiness parameter over an extended period. Our findings reveal that higher price stickiness amplifies initial deviations in economic variables and slows their return to equilibrium, while reduced stickiness leads to quicker convergence. Additionally, we explore the influence of price stickiness on household income and labor supply decomposition and compare the results to a Representative Agent New Keynesian (RANK) model. Finally, the study assesses the role of borrowing constraints in the HANK model, concluding that relaxing such constraints can lead to a more equitable distribution of assets and an increase in aggregate spending.
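The amplification result in the abstract can be previewed outside the full HANK machinery with the textbook three-equation New Keynesian (RANK) model, where the Calvo probability θ of not resetting prices maps into the Phillips-curve slope κ = (1 − θ)(1 − βθ)/θ. The sketch below is illustrative only: the solution method (undetermined coefficients under an AR(1) monetary shock) and all parameter values are assumptions of this example, not taken from the paper.

```python
import numpy as np

def irf_output(theta, beta=0.99, sigma=1.0, phi_pi=1.5, rho=0.5, T=12):
    """Output-gap IRF to a unit AR(1) monetary shock in the basic NK model.

    Solved by undetermined coefficients: x_t = a*v_t, pi_t = b*v_t, with
    v_t = rho*v_{t-1} + e_t. The slope kappa falls as theta (stickiness) rises.
    """
    kappa = (1 - theta) * (1 - beta * theta) / theta
    a = -1.0 / (sigma * (1 - rho) + kappa * (phi_pi - rho) / (1 - beta * rho))
    return a * rho ** np.arange(T)

sticky = irf_output(theta=0.9)    # prices reset rarely
flexible = irf_output(theta=0.5)  # prices reset often
# Stickier prices amplify the initial output deviation:
assert abs(sticky[0]) > abs(flexible[0])
```

In this simple benchmark the persistence of the IRF is pinned down by the shock's autocorrelation, so the sketch only captures the amplification channel, not the slower convergence that the heterogeneous-agent model delivers.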

Similar Papers
  • Research Article
  • Cited by: 2
  • DOI: 10.1086/657533
Comment
  • Jan 1, 2011
  • NBER Macroeconomics Annual
  • Lawrence J. Christiano


  • Research Article
  • Cited by: 3
  • DOI: 10.1086/696069
Distortions in Macroeconomics
  • Apr 1, 2018
  • NBER Macroeconomics Annual
  • Olivier Blanchard

After-dinner talks are the right places to test tentative ideas, hoping for the indulgence of the audience. Mine will be in that spirit, and reflect my thoughts on what I see as a central macroeconomic question: What are the distortions that are central to understanding short-run macroeconomic evolutions?

I shall argue that, over the past 30 years, macroeconomics had, to an unhealthy extent, focused on a one-distortion (nominal rigidities), one-instrument (policy rate) view of the macro economy. As useful as the body of research that came out of this approach was, it was too reductive, and proved inadequate when the Great Financial Crisis came. We need, even in our simplest models, to take into account more distortions. Having stated the general argument, I shall turn to a specific example and show how this richer approach modifies the way we should think about policy responses to the low neutral interest rates we observe in advanced economies today.

Let me develop this theme in more detail.

Back in my student days, that is, the mid-1970s, much of macroeconomic research was focused on building larger and larger macroeconometric models based on the integration of many partial equilibrium parts. Some researchers worked on explaining consumption, others on explaining investment, or asset demands, or price and wage setting. The empirical work was motivated by theoretical models, but these models were taken as guides rather than as tight constraints on the data. The estimated pieces were then put together in larger models.
The behavior captured in the estimated equations reflected in some ways both optimization and distortions, but the mapping was left, it was felt by necessity, implicit and somewhat vague. (I do not remember hearing the word "distortions" used in macro until the 1980s.)

These large models were major achievements. But, for various reasons, researchers became disenchanted with them. Part of it was obscurity: the parts were reasonably clear, but the sum of the parts often had strange properties. Part of it was methodology: identification of many equations was doubtful. Part of it was poor performance: the models did not do well during the oil crises of the 1970s. The result of disappointment was a desire to go back to basics.

For my generation of students, three papers played a central role. One was the paper by Robert Lucas (1973) on imperfect information. The other two were the papers by Stanley Fischer (1977) and by John Taylor (1980) on nominal rigidities. While the approaches were different, the methodology was similar: the focus was on the effects of one distortion — imperfect information leading to incomplete nominal adjustment in the case of Lucas, and explicit nominal rigidities, without staggering of decisions in Fischer, with staggering of decisions in Taylor. All other complications were cast aside to focus on the issue at hand, the role of nominal rigidities and the implied nonneutrality of money.

Inspired by these models, further work then clarified the role of monopolistic competition, the role of menu costs, and the role of different staggering structures, showing how each of them shaped the dynamic effects of nominal shocks. The natural next step was the re-integration of these nominal rigidities in a richer, microfounded, general equilibrium model. The real business cycle model, developed by Kydland and Prescott (1982), provided the simplest and most convenient environment.
Thus was born the New Keynesian (NK) model, a slightly odd marriage of the most neoclassical model and an ad hoc distortion. But it was a marriage that has held together to this day.

In the hands of researchers like Woodford (2003) or Clarida, Gali, and Gertler (1999), the model provided the basis, or at least the intellectual support, for the development of a new approach to monetary policy, that is, inflation targeting, an approach adopted by most central banks around the world. It had a rich set of implications, with their origin deriving from the basic conceptual structure: one distortion, that is, nominal rigidities in some form (often the convenient Poisson form derived by Calvo), and one instrument, the nominal policy rate. The right use of the instrument could largely undo the distortion. Maintaining constant and low inflation would both minimize distortions and lead to the right level of output, a proposition Jordi Gali and I baptized, tongue in cheek, the Divine Coincidence (Blanchard and Gali 2007).

What I have described is obviously a caricature. First, but this is minor, there had to be at least another distortion: to talk about price setting, firms had to have some pricing power, and this led to a monopoly markup. Under Dixit-Stiglitz constant elasticity assumptions, the markup, however, was constant, and the effects of the distortion were largely irrelevant with respect to the effects of monetary policy. Second, some models had more than one nominal rigidity, for example, rigidities in both wage and price setting as in Erceg, Henderson, and Levin (2000); some models combined real and nominal rigidities, for example, in my work with Gali (Blanchard and Gali 2007). Third, there was important work on credit (e.g., Bernanke and Gertler 1989) and on liquidity (e.g., Diamond and Dybvig 1983; Holmstrom and Tirole 1998).
But, while these papers were well known and some of these mechanisms were integrated in DSGE models, they did not become part of the basic model. (I remember telling Bengt that, while I admired his work on liquidity with Jean, I was not sure how central it was to macro.) Stable and low inflation as the target, and the use of the policy rate as the instrument, remained the basic approach to policy.

Even before the Great Financial Crisis, I felt some unease with two characteristics of the basic model and its larger DSGE cousins (Blanchard 2009). The first was that the deep reasons behind nominal rigidities, such as the costs of collecting information or of taking decisions, were probably relevant beyond price or wage setting, and thus were relevant for consumption, investment, and portfolio choices, with important but neglected implications for macroeconomic dynamics. The second was that the models assumed far too much forward-lookingness on the part of agents. When combined with rational expectations, the implications of the Euler equation for consumption, or the interest parity condition for exchange rates, were simply counterfactual.

The financial crisis then made it clear that the basic model, and even its DSGE cousins, had other serious problems: the financial sector was much more central to macroeconomics than had been assumed. Financial markets were incomplete, raising issues of solvency and liquidity. The role and the importance of debt were central to understanding credit booms and busts. Bank runs were not just a historical footnote, but an essential aspect of maturity transformation. These distortions were at the core of the crisis; nominal rigidities may have made it worse, but even absent nominal rigidities, the financial crisis would likely have led to a large decrease in output.

Since the start of the crisis, DSGE models have been extended to allow for a richer financial sector and to integrate some of these distortions (e.g., Gertler and Kiyotaki 2013).
But I feel we still do not have the right core model. Put another way, suppose that we were building a small macroeconomic model from scratch. What are, say, the three distortions we would deem essential to have in such a model, and, by implication, to have as the core of any DSGE model? What model should we teach at the start of the first-year graduate course?[1]

I do not have the answer, but I have a few ideas. This is where my talk becomes even more tentative.

My first distortion would remain nominal rigidities. As much as I try, I just cannot interpret macroeconomic evolutions without relying on nominal rigidities. Proof of their relevance is in the ability of central banks to maintain their desired nominal and real interest rates over long periods of time, or in the dramatically different behavior of real exchange rates under fixed and flexible exchange rate systems (Mussa 1986).

My second distortion would be finite horizons. Not so much the finiteness that comes from death and the absence of operative bequest motives, but the finite horizon that comes from bounded rationality, from myopia, from the inability to think too far into the future.

My third distortion would be the role of own funds in spending decisions, whether it is capital for banks, or capital or collateral for firms or people. While it was only one of many distortions at play in the financial crisis, it can explain much of what happened, and how shocks affect financial intermediation.

How I would actually put them together in a basic model is a much harder question, the difference between a dinner talk and a serious paper. We have off-the-shelf formalizations for nominal rigidities, for myopia, for capital constraints: for example, Calvo for the first, Gabaix (2016) for the second, and Holmstrom and Tirole (1997) for the third.[2] Each of them has its strengths and weaknesses, and whether they fit together conceptually is not obvious.
(On this, I like the remarks by Cochrane [2016] on the potential misuse of the Gabaix formalization of myopia.) In thinking about how to combine these or other formalizations, I still struggle between keeping strictly to microfoundations or writing plausible characterizations more faithful to the empirical evidence, but more loosely connected to theory (this is the old discussion about the pros and cons of the IS-LM versus the NK model, and whether there is a middle way). But this is a separate set of methodological issues, which I shall leave aside here.

The Low Real Safe Rate and Macroeconomic Policy

For better or for worse, simple conceptual frames such as the NK model strongly shape and limit our thinking. With the above discussion in mind, let me take an example, namely the potential policy implications of the very low level of the policy rate needed to maintain output at potential, the so-called neutral rate.[3]

Nearly all the discussion about policy implications has focused on monetary policy. In the one-distortion, one-instrument view of the economy, so long as the policy rate does not hit the zero lower bound, the low neutral rate does not pose a particular problem: the central bank should simply choose a policy rate consistent with this low neutral rate. At the zero lower bound (or, to the extent that we now know that policy rates can be at least slightly negative, the "effective lower bound"), the issue becomes the degree to which financial assets are imperfect substitutes, and how the policy tool kit must be extended to allow for purchases of specific assets. This is indeed how, for the most part, both the policy discussion and policy actions have unfolded.

Figure 1 suggests, however, that the discussion should be more ambitious. It shows the evolution of the one-year real rate (constructed as the difference between the one-year Treasury rate and the corresponding CBO forecast of inflation) and the real growth rate in the United States since 1980.
The one-year real rate has indeed come down since the early 1980s. And, interestingly, it is now substantially below the growth rate, and expected to be below it for the foreseeable future. This raises two interesting possibilities.

The first is that the low policy rate reflects a low marginal product of capital, and that the US economy has become dynamically inefficient. This could be the case if, for example, consumers had finite horizons, either for physical reasons as in the overlapping generations model (should we call death a distortion?) or because of bounded rationality. If this were the case, the right policy tool would not be monetary policy, but rather policies aimed at decreasing saving. The right focus should be on fiscal policy. The right policy would be to increase public debt, and such a policy could be Pareto improving.

As exciting as this possibility would be, it does not appear, however, that this is the right explanation for the low safe rate. What matters for dynamic inefficiency is not the relation between the safe rate and the growth rate, but between the marginal product of capital and the growth rate. And the empirical evidence on the marginal product is that it has remained much higher than the growth rate.

This leads to the second hypothesis: that the difference between the marginal product and the safe rate has increased, leading to a low safe rate for a given marginal product. Put another way, it points to a large liquidity or risk premium. This in turn leads to a focus on the factors behind the premium, and the role of distortions in financial markets. Thinking of the premium as a risk premium takes us back to the equity premium puzzle identified by Mehra and Prescott (1985), and the various tentative resolutions to the puzzle. Thinking of the premium as a liquidity premium takes us to what is behind the demand for safe assets, along the lines of Caballero and Farhi (2014).
It leads us to think about the role of financial regulations, and thus the role of regulatory policy. And if the high premium reflects, at least in part, distortions, the focus should then be on both fiscal and financial policies. If, for example, the safe rate is going to remain below the marginal product of capital, this implies that the government can borrow, never repay the debt, and still maintain a stable debt-to-GDP ratio. Should it do it? The fact that it can does not mean that it should. Or, to the extent that various distortions are behind the premium, should it instead remove them, even if this means a higher safe rate, and thus a higher cost of public borrowing?

My intention here was not to give the answers, but to show how much a richer view of the relevant distortions leads to a richer discussion of policy. To repeat and conclude: We must move from a dominant "one distortion/one instrument" view to a "many distortions/many instruments" view of the economy. In doing so, the way we think about the economy, and about the appropriate policies, will be much more fertile.

Endnotes

Talk, NBER Macroeconomics Annual Conference, April 2017. I thank Marty Eichenbaum, Jonathan Parker, and Adam Posen for comments. For acknowledgments, sources of research support, and disclosure of the author's material financial relationships, if any, please see http://www.nber.org/chapters/c13955.ack.

1. This may be a hopeless and misguided search. Maybe even the simplest characterization of fluctuations requires many more distortions. Maybe different distortions are important at different times. Maybe there is no simple model … I keep faith that there is.

2. A fascinating question is why the Euler equation fails. One hypothesis is bounded rationality, for example, à la Gabaix. Another is borrowing constraints, for example, à la McKay, Nakamura, and Steinsson (2016). The answer is probably both.
Interestingly, both lead, at least to a close approximation, to a similar modified Euler equation.

3. After giving the talk, I was made aware of an article by Davig and Gurkaynak (2015) that has a closely related theme.

References

Bernanke, Ben, and Mark Gertler. 1989. "Agency Costs, Net Worth, and Business Fluctuations." American Economic Review 79 (1): 14–31.
Blanchard, Olivier. 2009. "The State of Macro." Annual Review of Economics 1:209–28.
Blanchard, Olivier, and Jordi Gali. 2007. "Real Rigidities and the New Keynesian Model." Journal of Money, Credit, and Banking 39 (1): 35–66.
Caballero, Ricardo, and Emmanuel Farhi. 2014. "The Safety Trap." NBER Working Paper no. 19927, Cambridge, MA.
Clarida, Richard, Jordi Gali, and Mark Gertler. 1999. "The Science of Monetary Policy: A New Keynesian Perspective." Journal of Economic Literature 37:1661–707.
Cochrane, John. 2016. "Comments on a Behavioral New-Keynesian Model by Xavier Gabaix." Working Paper, University of Chicago.
Davig, Troy, and Refet Gurkaynak. 2015. "Is Optimal Monetary Policy Always Optimal?" International Journal of Central Banking 11 (S1): 353–82.
Diamond, Douglas, and Philip Dybvig. 1983. "Bank Runs, Deposit Insurance, and Liquidity." Journal of Political Economy 91 (3): 401–19.
Erceg, Christopher, Dale Henderson, and Andrew Levin. 2000. "Optimal Monetary Policy with Staggered Wage and Price Contracts." Journal of Monetary Economics 46 (2): 281–313.
Fischer, Stanley. 1977. "Long-Term Contracts, Rational Expectations and the Optimal Money Supply Rule." Journal of Political Economy 85:191–205.
Gabaix, Xavier. 2016. "A Behavioral New Keynesian Model." NBER Working Paper no. 22954, Cambridge, MA.
Gertler, Mark, and Nobuhiro Kiyotaki. 2013. "Banking, Liquidity, and Bank Runs in an Infinite-Horizon Economy." NBER Working Paper no. 19129, Cambridge, MA.
Holmström, Bengt, and Jean Tirole. 1997. "Financial Intermediation, Loanable Funds, and the Real Sector." Quarterly Journal of Economics 112 (3): 663–91.
———. 1998. "Private and Public Supply of Liquidity." Journal of Political Economy 106 (1): 1–40.
Kydland, Finn, and Edward Prescott. 1982. "Time to Build and Aggregate Fluctuations." Econometrica 50:1345–70.
Lucas, Robert. 1973. "Some International Evidence on the Output-Inflation Trade-Off." American Economic Review 63 (3): 326–34.
McKay, Alisdair, Emi Nakamura, and Jon Steinsson. 2016. "The Power of Forward Guidance Revisited." American Economic Review 106 (10): 3133–58.
Mehra, Rajnish, and Edward Prescott. 1985. "The Equity Premium." Journal of Monetary Economics 15:145–61.
Mussa, Michael. 1986. "Nominal Exchange Rate Regimes and the Behavior of Real Exchange Rates: Evidence and Implications." Carnegie-Rochester Conference Series on Public Policy 25:117–214.
Taylor, John. 1980. "Aggregate Dynamics and Staggered Contracts." Journal of Political Economy 88 (1): 1–24.
Woodford, Michael. 2003. Interest and Prices: Foundations of a Theory of Monetary Policy.
Princeton, NJ: Princeton University Press.

NBER Macroeconomics Annual, Volume 32, 2017. Sponsored by the National Bureau of Economic Research (NBER). Article DOI: https://doi.org/10.1086/696069. © 2018 by the National Bureau of Economic Research. All rights reserved.

  • Research Article
  • DOI: 10.1086/700898
Comment
  • Jan 1, 2019
  • NBER Macroeconomics Annual
  • Jennifer La'O


  • Research Article
  • DOI: 10.1086/669183
Comment
  • Jan 1, 2013
  • NBER Macroeconomics Annual
  • Ricardo Reis

Introduction

This is a rich and provocative paper, full of ideas and insights on how to use panel data on industries and sectors to test models of nominal rigidities and their implications for labor markets. Without doing proper justice to all that is in this paper, I would summarize its contribution as providing two empirical facts and an argument.

The first fact is that expenditures on durables fall proportionately by more than expenditures on nondurables in a recession. Theoretically, expenditures on durables are a small fraction of the stock of the durable, a direct consequence of small rates of depreciation of the stocks. Therefore, if consumers want to keep the level of their stock of durables in line with the level of their consumption of nondurables, they have to proportionately decrease the expenditure on durables by more than the expenditure on nondurables. Empirically, it is well known that aggregate durable spending is more volatile than nondurable spending over the business cycle. The authors further show that there is a statistically significant positive correlation between the durability of a sector and the cyclicality of employment or expenditures in that sector.

Second, the authors estimate the following regression equation:

$$S_{it} = \alpha_t + \beta\,(\mathrm{lifespan}_i \times Y_t) + \varepsilon_{it}, \qquad (1)$$

where $S_{it}$ is the share of output paid to production labor in industry $i$ at date $t$, $\mathrm{lifespan}_i$ is the average years of duration of a good in sector $i$, $Y_t$ is aggregate output, $\alpha_t$ are year dummies, and $\varepsilon_{it}$ are errors.
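The mechanics of the authors' specification — labor shares regressed on year dummies plus a lifespan × output interaction — can be sketched on synthetic data with plain least squares. Every number below (industry counts, the "true" β, the noise scale) is invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_yrs = 20, 30
lifespan = rng.uniform(1, 15, n_ind)   # durability of each industry (years)
y = rng.standard_normal(n_yrs)         # detrended aggregate output
beta_true = 0.4

# Synthetic labor shares: year effects + beta * (lifespan x output) + noise
alpha_t = rng.standard_normal(n_yrs)
S = alpha_t + beta_true * np.outer(lifespan, y) \
    + 0.1 * rng.standard_normal((n_ind, n_yrs))

# Regressors: one dummy per year, plus the interaction lifespan_i * y_t
D = np.kron(np.ones((n_ind, 1)), np.eye(n_yrs))  # year dummies (stacked i-major)
inter = np.outer(lifespan, y).reshape(-1, 1)     # interaction term
X = np.hstack([D, inter])
beta_hat = np.linalg.lstsq(X, S.reshape(-1), rcond=None)[0][-1]
assert beta_hat > 0  # labor shares of durable sectors are more cyclical
```

The point of the sketch is only that β is identified from cross-industry differences in cyclicality, after the year dummies absorb anything common to all industries in a given year.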
They find that more durable goods have labor shares that are more sensitive to the business cycle: β > 0.

To get a different perspective on this result, consider a world with two goods, one that is nondurable (c) and so has a life span of 1, while the other is a durable (x) with a life span of N years. Using small letters to denote logs of capital letters, the regression equation can be rewritten in terms of the two sectors' relative labor share. Therefore, another way to state the authors' finding is that the relative labor share of durables is procyclical.

How does one go from these two empirical observations to their conclusion that "We find evidence in support of Keynesian labor demand"? The authors' argument goes as follows. A crucial idea of Keynesian economics is that prices are sticky and production is determined by demand. Therefore, when we enter a recession and demand for firms' goods falls, instead of cutting prices, they cut production and fire workers. By reducing their workforce, firms will be increasing the marginal product of labor relative to wages, which serves to lower the marginal cost of production. Therefore, with sticky prices and lower marginal costs, the markup rises in recessions. Now, in durable-goods sectors, the first fact established by the authors is that demand falls by more in a recession. Therefore, this mechanism will be stronger in durable-goods sectors, so their markup will rise by more in a recession. Finally, under some assumptions on production that are satisfied in many macro models, the labor share is a measure of the inverse of the markup. Putting it all together, Keynesian models would predict that the labor share is more procyclical in durable goods, matching the authors' second empirical finding.

Working More Slowly through the Argument

There are many steps in this argument. To understand it better, I use a simple model of durables and sticky prices that relies on three pillars.

The Relative Expenditure on Durables

I start with the demand side of the economy.
There is a representative agent whose problem has three parts. The first line shows the intertemporal preferences, separable in the consumption of nondurables ($C_t$), durables ($D_t$), and hours worked ($L_t$). Agents spend resources to buy each of the two goods, and receive labor income in exchange for their work. The second line shows their budget constraint, where $B_t$ are bonds they hold as savings, and $\Pi_t$ are profits received from firms. Finally, the third line is a standard geometric-depreciation law of motion for durables,

$$D_t = (1-\delta)D_{t-1} + X_t,$$

where $\delta$ is the depreciation rate and $X_t$ is the expenditure on durables.

The first-order condition with respect to expenditure on durables equates $\gamma_t$ times the price of durables to the marginal utility of the durable stock plus a discounted continuation term, where $\gamma_t$ is the nominal marginal utility of income (the Lagrange multiplier on the budget constraint). I will focus on the case where the good is minimally durable; that is, where $\delta$ is very close to 1. The results that follow would become stronger as $\delta$ becomes smaller. When $\delta$ is close to 1, the second (continuation) term on the right-hand side of this equation is approximately zero. Combining it with the optimality condition with respect to nondurable consumption gives the relative demand for durables, where small letters denote logs of variables.

Log-linearizing the law of motion for durables around a steady state, we get

$$d_t = (1-\delta)d_{t-1} + \delta x_t.$$

Again, with the assumption that $\delta$ is close to 1, this approximation is very close to being exact. But, as long as $\delta < 1$, the investment in durables will fluctuate by more than the stock of durables.

Combining these two equations, we obtain the key relation for the relative spending on durables. We can see already how this standard model of the demand for durables can go a long way toward matching the first empirical finding of the authors.
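The flow–stock amplification in the log-linearized law of motion is easy to check numerically: inverting it gives $x_t = (d_t - (1-\delta)d_{t-1})/\delta$, so small percentage fluctuations in the stock imply much larger percentage fluctuations in expenditure when δ is small. The depreciation rate and the AR(1) process for the stock below are assumptions of this illustration:

```python
import numpy as np

delta = 0.05  # assumed quarterly depreciation of a long-lived durable
rng = np.random.default_rng(1)

# Persistent AR(1) fluctuations in the (log-deviation of the) durable stock
d = np.zeros(400)
for t in range(1, 400):
    d[t] = 0.9 * d[t - 1] + rng.standard_normal()

# Expenditure implied by inverting d_t = (1-delta)*d_{t-1} + delta*x_t
x = (d[1:] - (1 - delta) * d[:-1]) / delta
assert np.std(x) > np.std(d)  # flows are far more volatile than the stock
```

With δ = 0.05 the standard deviation of the implied expenditure series is several times that of the stock, which is the theoretical counterpart of durable spending being the more volatile series in the data.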
If the relative price of durables does not change much during the business cycle, then expansions in total consumption must come with a larger increase in the spending on durables than the increase in spending on nondurables.

The Relative Production of Durables

Next, I turn to the supply side. For both goods, I assume that output results from combining capital and labor in a Cobb–Douglas production function with common labor exponent $\alpha$. In the very short run, it is reasonable to assume that capital is fixed in each sector, and so I omit it from the expressions. The log-linear versions of the production functions then are

$$c_t = \alpha l_{ct}, \qquad (3)$$
$$x_t = \alpha l_{xt}, \qquad (4)$$

where $l_{ct}$ and $l_{xt}$ are the amounts of labor used to produce nondurables and durables, respectively. These two equations give the link between the two sides of the first empirical fact by the authors: the relative cyclicality of expenditures across the two sectors will be mimicked by the relative cyclicality of employment.

The markup in a sector is the ratio of the price of its good to the marginal cost of producing it. In turn, since labor is the only variable input, marginal cost equals the wage divided by the marginal product of labor. Using the Cobb–Douglas production function, the log markups in the two sectors are

$$\mu_{ct} = p_{ct} - w_t + c_t - l_{ct} = -s_{ct}, \qquad (5)$$
$$\mu_{xt} = p_{xt} - w_t + x_t - l_{xt} = -s_{xt}. \qquad (6)$$

The second equality uses the definition of the log of the labor share in each sector. This model can therefore also capture the premise, in the authors' work, that labor shares are inversely proportional to markups.

Flexible and Rigid Prices

Finally, given supply and demand, I now discuss how prices are set and markets clear. I start with two extreme cases, to contrast classical models of flexible prices and Keynesian models of sticky prices. In one extreme, prices are flexible and desired markups are constant. Subtracting equation (6) from equation (5), the relative price of durables is

$$p_{xt} - p_{ct} = (c_t - l_{ct}) - (x_t - l_{xt}) = (1-\alpha)(l_{xt} - l_{ct}),$$

where the second equality uses equations (3) and (4). This equation shows how firms pick employment based on the price of their goods.
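The inverse labor-share/markup link invoked above is an identity under Cobb–Douglas pricing: if the price is $P = \mu \cdot W/\mathrm{MPL}$, the labor share $WL/(PY)$ collapses to $\alpha/\mu$. A quick numeric check, with arbitrary illustrative values:

```python
alpha, mu = 0.7, 1.2   # illustrative labor exponent and gross markup
L, W = 3.0, 2.5        # arbitrary labor input and wage
Y = L ** alpha         # Cobb-Douglas output, capital fixed and normalized
MPL = alpha * Y / L    # marginal product of labor
P = mu * W / MPL       # price = markup times marginal cost (W / MPL)
labor_share = W * L / (P * Y)
assert abs(labor_share - alpha / mu) < 1e-12  # share = alpha / markup
```

In logs, $s = \log\alpha - \log\mu$, which is exactly why the text can treat the (relative) labor share as the mirror image of the (relative) markup.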
Combining this result with equation (2), linking the demand for goods to their relative price, we find that, with flexible prices, employment in the durables sector is more volatile than in the nondurables sector. This is consistent with the authors' first finding, as long as α < 1, so that there are increasing marginal costs. The second finding cannot be explained, since the labor share is constant.

In the other extreme, prices are fully rigid, so $p_{xt} = p_{ct}$. Similar steps to the ones described in the previous paragraph show that, with rigid prices, employment in the durables sector is again more volatile than in the nondurables sector, fitting the first fact; this relation is equation (7). Moreover, it is easy to verify that relative employment in durables is more cyclical in the rigid case than with flexible prices. As for the labor share, the corresponding relation, equation (8), implies that the relative labor share of durables rises with total output.

Compare equations (7) and (8) with the regression equation (1). The rigid-price model can perfectly account for both of the empirical findings. When total output goes up in the economy, employment in both sectors rises, more so in the durables sector, according to equation (7). The relative labor share will, according to equation (8), increase during the boom, precisely as in the data. This is the basis of the authors' conclusion: the rigid-price Keynesian model can fit the facts, whereas a flexible-price classical model cannot.

We can extend the argument to the more plausible case where prices are sticky, but not fully rigid. Combining equations (2) through (6), we obtain a relation showing that, for a given size of fluctuations in total output, captured by $l_{xt}$, a procyclical relative employment is associated with a procyclical relative labor share. The regression estimated by the authors seems to confirm the Keynesian model of labor demand. But is this intuition more general than the previous simple model?

A. Digression: The Cyclicality of Markups

Note that the authors did not estimate a regression that also included output on its own; that is, their regression equation did not include a term in output, and they never estimated a coefficient, call it η, that would capture the cyclicality of the labor share within the sector. Therefore, their regression has nothing to say on whether markups are procyclical or countercyclical. The authors' results are consistent with markups for durables being more countercyclical than markups for nondurables, but they are also consistent with durable markups being less procyclical than nondurable markups.

There is a large literature on the cyclicality of markups, including Bils (1987), Rotemberg and Woodford (1992, 1999), Basu and Fernald (1997), Hall (2009), and Nekarda and Ramey (2010). The main obstacle this literature has faced is that Keynesian models predict that markups are countercyclical in response to monetary shocks, since prices do not move but marginal costs rise with output, yet markups are procyclical in response to technology shocks that lower marginal costs. To test the models, one needs a measure or an instrument to isolate one type of shock.

This is not so in the authors' regression. In the theory in the previous section, I derived the authors' prediction without ever having to state what aggregate shock drives the business cycle. Employment in the durables sector was more volatile than nondurables employment, and the relative labor share of durables was procyclical. This holds always, via the reduced-form relations implied by the theory, and not only in the responses or partial derivatives of these variables with respect to some shock.
The authors’ approach is commendable because, by focusing on relative markups, they sidestepped the main obstacle the literature had faced so far.

Sticky Prices Do Not Imply a Countercyclical Relative Markup

While the previous derivations suggest that Keynesian labor demand and sticky prices may explain the facts, they do not show that it must be so. This would only be the case if, first, sticky-price models always predicted a countercyclical relative markup and, second, if flexible-price models were never able to produce one.

Starting with the first premise, I simulated the model of Barsky, House, and Kimball (2007) to investigate it. The household chooses consumption and labor supply exactly as in the second section, but now it also allocates total expenditure across varieties of the two types of consumption goods, according to a Dixit–Stiglitz aggregator with parameter ν. The firms still operate identical Cobb–Douglas production functions, but there is now a continuum of them, of measure 1 in each sector, operating under monopolistic competition. Capital is still fixed in the aggregate and in each sector, but can now be reallocated across firms within a sector at no cost. Finally, firms face nominal rigidities à la Calvo, with θx and θc giving the share of durables and nondurables firms, respectively, that do not adjust their price every period.

I set the parameter values in a standard way, described in table C1. Still following Barsky, House, and Kimball (2007), I assume monetary policy sets an exogenous process for nominal GDP, which follows a random walk, and I solve the model by log-linearizing around the nonstochastic steady state. Figure C1 shows the impulse responses of several variables to a 1 percent shock at date 0 when prices adjust on average every four quarters in both sectors. The model is consistent with the authors’ facts: durables expenditure is more cyclical than nondurables expenditure, and the relative durables markup is countercyclical.
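As a quick check on the mapping from Calvo parameters to price durations: with a per-quarter probability θ of not adjusting, price spells are geometric with mean 1/(1 − θ), so θ = 0.75 corresponds to the four-quarter case used here. A minimal sketch (the function name is mine, for illustration):

```python
# Expected price-spell length under Calvo pricing: each quarter a firm
# keeps its price with probability theta, so the number of quarters a
# price stays in place is geometric with mean 1 / (1 - theta).

def expected_duration(theta: float) -> float:
    """Average number of quarters a price remains unchanged."""
    if not 0.0 <= theta < 1.0:
        raise ValueError("theta must lie in [0, 1)")
    return 1.0 / (1.0 - theta)

if __name__ == "__main__":
    # theta = 0.75 reproduces 'prices adjust on average every four quarters'
    for theta in (0.5, 0.75, 0.9):
        print(f"theta = {theta}: {expected_duration(theta):.1f} quarters")
```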
Moreover, the absolute markups are also countercyclical in both sectors.

Table C1. Calibration

Fig. C1. Impulse responses to a 1 percent monetary shock with θc = θx = 0.75

Figure C2 further confirms the success of the model. It shows the date-0 response for any value of the price-rigidity parameter, keeping it the same for both sectors. In all cases, the relative output of durables is procyclical while the relative markup is countercyclical.

Fig. C2. Period-0 impact of a monetary shock when θc = θx

Nevertheless, these positive results are not robust. In figure C3, I perform the same calculation but now set θc = 1.5θx, so that nondurables prices adjust less often than durables prices. Now, the cyclicality of the relative expenditure on durables depends on the frequency of price adjustment. If both sectors adjust prices on average every six months, then the model predicts the opposite of the first empirical finding. Moreover, the relative markup of durables is procyclical for all parameters, so the second empirical finding is also at odds with this sticky-price model.

Fig. C3. Period-0 impact of a monetary shock when θc = 1.5θx

Figure C4 instead varies θc between 0.01 and 0.99 in equal steps, while varying θx between 0.01 and 0.75, also in equal steps. Therefore, at the left of the diagram, durables and nondurables prices both adjust very frequently, and as we move to the right, nondurables prices become progressively stickier than durables prices. Now, relative expenditure on durables is always procyclical, in line with the first finding. Yet the relative markup on durables is procyclical if the economy is very rigid, but countercyclical if prices are very flexible.1

Fig. C4. Period-0 impact of a monetary shock with asymmetric price stickiness

Therefore, the authors’ findings are useful and informative, but they are not tests of the Keynesian model. Figures C1 through C4 show that a countercyclical relative markup of durables is not a fundamental property of a New Keynesian model. It is not even robustly associated with procyclical relative employment or expenditure on durables. Moreover, price stickiness can matter in a nonmonotonic way: figure C4 gives an example where prices closer to fully flexible are actually more likely to generate a countercyclical relative markup on durables, while more rigid prices make it more likely that the relative markup moves in the opposite direction to the authors’ empirical findings. I conclude from these results that the empirical findings in this paper neither confirm nor reject the sticky-price model.

Flexible Prices Are Not Inconsistent with Countercyclical Relative Markups

Oh (2012) proposed a tractable and insightful model of durables when there is a secondhand market. In his model, the stock of durables evolves according to:

If the share of durables that is sold, st, equals zero, then this is just the standard law of motion for durables that we saw in the second section. But if agents can sell their durables after depreciation, this allows them to lower their stock and receive a price Put in return. Net spending on durables then equals the amount of new durables bought, DtN, times the price Pdt, minus the revenues from selling old durables.

Oh (2012) further assumes that depreciation accelerates at a quasi-geometric rate. In the first period that a durable is used, the depreciation rate is ρδ, with ρ < 1, whereas in all subsequent periods the depreciation rate rises to δ.
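A quick numerical check of this depreciation schedule: when a fraction of the stock is in its first period of use, the stock-wide depreciation rate is a weighted average of ρδ and δ. The parameter values and the `new_share` fraction below are illustrative, not taken from Oh (2012):

```python
# Stock-wide depreciation under the quasi-geometric schedule: goods in
# their first period of use depreciate at rho * delta (rho < 1), all
# older goods at delta.

def average_depreciation(delta: float, rho: float, new_share: float) -> float:
    """Depreciation rate of the whole stock when a fraction `new_share`
    of it is in its first period of use."""
    return new_share * rho * delta + (1.0 - new_share) * delta

if __name__ == "__main__":
    delta, rho = 0.05, 0.5          # illustrative values only
    for f in (0.0, 0.2, 0.4):
        print(f"new share {f:.1f}: average depreciation "
              f"{average_depreciation(delta, rho, f):.3f}")
```

The rate falls as the new-goods share rises, which is the sense in which replacement “lowers the depreciation rate of the overall stock.”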
This formulation implies that selling used durables and buying new ones lowers the depreciation rate of the overall stock, capturing some of the benefit from replacing old goods with new ones.

Turning to the supply of durable goods, there is a firm producing new goods and a continuum of firms buying and refurbishing old durables, which are then sold in the same market as the new ones. Oh (2012) assumes these firms play a sequential oligopoly game: first, the secondhand retailers choose whether to enter the market; second, the new-durables firm sets the price for the good; and third, the entrants pick how much to supply. The new-durables firm plays the role of a dominant leader, whereas the secondhand retailers are a price-taking competitive fringe.

Working by backwards induction, given that the secondhand retailers will supply the amount Mit of the durable variety i, the new-goods leader chooses the price Pidt to solve:

The residual demand from the secondhand firms then leads to a countercyclical desired markup. When output is booming and the secondhand market is producing a great amount, the residual elasticity of demand for new goods is smaller, so the desired markup is smaller as well. This argument applies to durable goods only. Therefore, this model, which has flexible prices, generates a countercyclical relative markup for durables with fixed capital, fitting the findings of the authors. There is no price rigidity, but markups move nonetheless, because changes in activity in the secondhand market alter the competitive pressure on the monopolist new-goods firm.2

Conclusion

In this discussion I focused on two of the many facts that Bils, Klenow, and Malin brought to the table: relative employment and the relative labor share of durables are procyclical. These and the other facts in this paper should guide research in the years to come.
More generally, using cross-section characteristics, like durability, to infer the different cyclicality of industries over time is an insight that promises to yield many more interesting findings.

However, I expressed some skepticism that these facts can provide a test that accepts or rejects the broad class of Keynesian models of nominal rigidities. I showed that some calibrations of a standard model of durables with sticky prices can produce the opposite of what the authors find in the data, while a simple model of durables with a secondhand market and flexible prices is consistent with the facts. This does not take away from the main accomplishment of the authors: to show convincingly that, as previously argued by Barsky, House, and Kimball (2007), the durability of goods has crucial implications for models of goods’ pricing.

Endnotes

For acknowledgments, sources of research support, and disclosure of the author’s material financial relationships, if any, please see http://www.nber.org/chapters/c12757.ack.

1. Barsky, House, and Kimball (2007) argue that durables prices are less sticky than nondurables prices, and even model them as perfectly flexible. The authors instead argue that the two sectors are equally sticky. The in-between cases that I consider in figures C3 and C4 are hard to reject in the data.

2. Parker (1997) provides another flexible-price model where the relative markup for durables is countercyclical, because buyers can time their purchases of durables. Note that to fit the facts, it is not enough to generate countercyclical desired markups, which many models are able to deliver. The models must predict that relative markups for durables are countercyclical.

References

Barsky, R. B., C. L. House, and M. S. Kimball. 2007. “Sticky-Price Models and Durable Goods.” American Economic Review 97 (3): 984–98.
Basu, S., and J. G. Fernald. 1997. “Returns to Scale in US Production: Estimates and Implications.” Journal of Political Economy 105 (2): 249–83.
Bils, M. 1987. “The Cyclical Behavior of Marginal Cost and Price.” American Economic Review 77 (5): 838–55.
Hall, R. E. 2009. “By How Much Does GDP Rise If the Government Buys More Output?” Brookings Papers on Economic Activity 2: 183–231.
Nekarda, C. J., and V. A. Ramey. 2010. “The Cyclical Behavior of the Price-Cost Markup.” Manuscript, University of California, San Diego.
Oh, H. 2012. “The Role of Durables Replacement and Second-Hand Markets in a Business-Cycle Model.” Manuscript, Columbia University.
Parker, J. A. 1997. “The Timing of Purchases, Market Power, and Economic Fluctuations.” Social Science Research Institute Working Paper 9724.
Rotemberg, J. J., and M. Woodford. 1992. “Oligopolistic Pricing and the Effects of Aggregate Demand on Economic Activity.” Journal of Political Economy 100 (6): 1153–207.
———. 1999. “The Cyclical Behavior of Prices and Costs.” In Handbook of Macroeconomics, vol. 1, edited by J. B. Taylor and M. Woodford, chapter 16, 1051–135. Amsterdam: North-Holland.

NBER Macroeconomics Annual, Volume 27 (2012). https://doi.org/10.1086/669183. © 2013 by The National Bureau of Economic Research. All rights reserved.

  • Research Article
  • 10.1086/657549
Discussion
  • Jan 1, 2011
  • NBER Macroeconomics Annual

Discussion

  • Research Article
  • 10.1086/690248
Comment
  • Jan 1, 2017
  • NBER Macroeconomics Annual
  • Harald Uhlig

Comment

  • Single Report
  • 10.5281/zenodo.1400168
A New Keynesian Model with Wealth in the Utility Function
  • Aug 20, 2018
  • Zenodo (CERN European Organization for Nuclear Research)
  • Pascal Michaillat + 1 more

This paper extends the textbook New Keynesian model by introducing wealth, in the form of government bonds, in households' utility function. As bonds are in zero net supply, the IS curve imposes that output is decreasing in the real interest rate---as in the old IS-LM model. In contrast, the textbook model's IS curve imposes that the real rate is constant, equal to the time discount factor. As a result, when price rigidity and the marginal utility of wealth are sufficiently large, our extended model's equilibrium has a unique steady state and is globally determinate, whether monetary policy is active, passive, or an interest-rate peg. This property greatly simplifies the analysis of the zero lower bound. Furthermore, several pathologies of the textbook model at the zero lower bound---such as the forward-guidance puzzle---disappear.

  • Research Article
  • Cited by 1
  • 10.1086/690245
Crises in Economic Thought, Secular Stagnation, and Future Economic Research
  • Jan 1, 2017
  • NBER Macroeconomics Annual
  • Lawrence Summers

Crises in Economic Thought, Secular Stagnation, and Future Economic Research

  • Research Article
  • Cited by 9
  • 10.1016/j.jmoneco.2011.05.011
Optimal disinflation in new Keynesian models
  • Apr 1, 2011
  • Journal of Monetary Economics
  • Marcus Hagedorn

Optimal disinflation in new Keynesian models

  • Research Article
  • 10.1086/594136
Comment
  • Jan 1, 2008
  • NBER Macroeconomics Annual
  • Bennett T Mccallum

Comment
Bennett T. McCallum, Carnegie Mellon University and NBER

I. Introduction

This is an interesting and challenging paper, in which Atkeson and Kehoe put forth a very strong critique of current mainstream monetary policy analysis. Monetary economists have, of course, been rather pleased with the development of their subject over the past 10–15 years, current U.S. policy difficulties notwithstanding. Indeed, the tone of a prominent recent expository paper by my colleague, Marvin Goodfriend, is somewhat triumphal in spirit.1 The spirit of the Atkeson and Kehoe paper, by contrast, is conveyed by a recent publication of theirs, together with coauthor Fernando Alvarez, which bears the title “If Exchange Rates Are Random Walks, Then Almost Everything We Say about Monetary Policy Is Wrong” (Alvarez, Atkeson, and Kehoe 2007). That paper focuses on exchange rate failures, whereas the current one stresses the term structure of interest rates, but the line of argument is basically the same.

The title of the 2007 paper leads me rather naturally to ask myself what I would say in answer to the implied question, “What important things do monetary economists really know—or at least believe—about monetary policy?” My own answer would go along the following lines: (i) We believe that if the monetary authority keeps monetary policy expansionary for a substantial length of time, the main effect will be to generate a higher inflation rate than would have prevailed otherwise, with little or no overall effect on aggregate production and employment. (ii) Nominal interest rates will be higher as well, with real rates being affected very little.
(iii) If, however, the monetary authority changes policy unexpectedly and abruptly in an expansionary direction, there will most likely be an expansion in aggregate output and employment—but it will be only temporary. (iv) If these changes are in the direction of tighter policy, the signs of the above‐mentioned effects will be reversed. (v) In particular, the monetary authority has the power to generate a recession, in which output and then the inflation rate will fall. (vi) The precise nature of the mechanism that generates the real effects of monetary policy changes of this type is not very well understood. Then, if my questioner had not wandered away in boredom, I would want to add something like the following: (vii) The foregoing points refer to an expansionary or contractionary monetary policy stance—loose or tight—but how is this measured? Well, a sustained high growth rate of the stock of base money will (under most institutional arrangements) be expansionary, but matters are a little less clear‐cut when the central bank actually carries out its policy by manipulating overnight interest rates. Nevertheless, there are ways in which we can characterize tighter versus looser policy in terms of interest rate rules by reference to the implied target inflation rate, the strength of responses to deviations from target, and so forth.

Now, I suspect that Atkeson and Kehoe probably do not disagree with most of these statements as to what monetary economists know (or believe), even on a substantive basis.2 But the title of their current paper, as distinct from the 2007 item, refers to a need for a new approach to monetary policy analysis. So let us turn to a consideration of what today’s mainstream approach is. As it happens there is a short statement of that type, in a paper of mine, that gives the following description.
The approach is one in which “the researcher specifies a quantitative macroeconomic model that is intended to be structural (invariant to policy changes) and consistent with both theory and data. Then, by stochastic simulation or analytical means, he determines how crucial variables (such as inflation and the output gap) behave on average under various alternative policy rules. Usually, rational expectations (RE) is assumed in both stages. Evaluation of the different outcomes can be accomplished by means of an optimal control exercise, or by reference to an explicit loss function, or left to the judgment (i.e., loss function) of the implied policymaker” (McCallum 2001, 258). Here, too, I doubt that Atkeson and Kehoe have any major disagreement with this general approach. What they do disagree with, if I understand at all, is the model that is typically used in recent work and taken to be structural.3

In a sense my last statement could be regarded as merely quibbling over their title. But the point seems to be one of some importance: if Atkeson and Kehoe can generate an optimizing model that incorporates reliable, quantitative estimates reflecting time‐varying “risk” (i.e., state‐dependent variances and covariances) and endogenously explains inflation and output fluctuations, then monetary economists would presumably be happy to incorporate such features in their models—and would not consider this to reflect any basically new approach.
Be that as it may, in what follows I will briefly review their featured empirical regularities, discuss issues concerning their suggested modeling strategy, and provide a brief conclusion.

1. See “How the World Achieved Consensus on Monetary Policy” (Goodfriend 2007).
2. They would probably grumble, justifiably, about the vagueness of point vii.
3. McCallum (2001, 258) goes on to say: “There is also considerable agreement about the general, broad structure of the macroeconomic model to be used.” Atkeson and Kehoe clearly would not share in this agreement.

II. Empirical Regularities

Atkeson and Kehoe begin, in Section I, with “four key regularities regarding the dynamics of interest rates and risk that we use to guide our construction” of a model and its pricing kernel. The first two pertain to a principal-components analysis of a collection of interest rates, specifically, a 3-month T-bill rate and zero-coupon yields on U.S. Treasury securities with k-year maturities for k = 1, 2, …, 13. The time series observations are monthly over 1946.12–2007.12. The first regularity is that “the first principal component accounts for over 90% of the variance of the short rate [i.e., the 3-month rate].” The second regularity is that “the second principal component is very similar to the yield spread between the short rate and the long [i.e., 13-year] rate.” Having demonstrated these facts—and also that the first component is correlated even more strongly with the long rate—the authors henceforth use just the short and long rates.

More substantively (and more questionably), the third and fourth regularities pertain to expected excess returns in the term-structure and international exchange rate contexts.
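As an aside on the first two regularities: the principal-components decomposition they rest on can be sketched on synthetic data. Everything below (the factor structure, loadings, and volatilities) is invented for illustration; it is not the authors' data or code:

```python
import numpy as np

# PCA of a synthetic panel of 1- to 13-year yields driven by a dominant
# random-walk "level" factor plus a smaller "slope" factor.
rng = np.random.default_rng(0)
T = 600
maturities = np.arange(1, 14)

level = np.cumsum(rng.normal(0.0, 0.10, T))   # common level factor
slope = np.cumsum(rng.normal(0.0, 0.03, T))   # slope factor
slope_loading = (maturities - maturities.mean()) / (maturities.max() - maturities.min())
yields = (level[:, None]
          + slope[:, None] * slope_loading
          + rng.normal(0.0, 0.01, (T, len(maturities))))

X = yields - yields.mean(axis=0)              # demean each maturity
_, s, _ = np.linalg.svd(X, full_matrices=False)
var_share = s**2 / np.sum(s**2)
print("variance share of PC1:", round(var_share[0], 3))
print("variance share of PC2:", round(var_share[1], 3))
```

With these made-up volatilities the first component absorbs nearly all of the variance, which is the flavor of the 90-plus-percent figure the authors report for actual yields.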
Specifically, movements in yield spreads and exchange rate premia are “associated with movements in risk.” The reason these regularities might be regarded by some readers as questionable is that, in many studies, “risk” is operationally the name given to differentials in expected returns that the analyst’s model is not able to explain.

Later in the paper, in Section V.A, Atkeson and Kehoe plot short-rate and long-rate time series for the United States over an extended period from 1836 through 2007. In addition, they include analogous plots for the United Kingdom, France, Germany, and the Netherlands. In all of these, the fluctuations of the long rate represent “a much smaller fraction of overall fluctuations in the short rate than they are in the postwar period.” Thus, they state: “A central question in the analysis of monetary policy at the secular level then is, What institutional changes led to this pattern?” In the preliminary version of this comment, I responded to a more pointed and strongly emphasized version of this query by stating that, to me, it is no surprise that expectations of future interest rates became unanchored during the post–World War II period, because, to again quote myself:

[The] collapse of the Bretton Woods system created, for the first time in history, a situation in which the world’s leading central banks were responsible for conducting monetary policy without an externally imposed monetary standard (often termed a “nominal anchor”). Previously, central banks had normally operated under the constraint of some metallic standard (e.g., a gold or silver standard), with wartime departures being understood to be temporary, i.e., of limited duration. Some readers might not think of the Bretton Woods system as one incorporating a metallic standard, but by design it certainly was, since the values of all other currencies were pegged to the U.S. dollar and the latter was pegged to gold at $35 per ounce.
(McCallum 1999, 175–76)

All in all, it seems that there is no difficulty in understanding why an altered monetary policy regime generated different expectations regarding inflation, and therefore future short interest rates, in the post–World War II era. The variability in long rates during the 1960s developed as market participants began to see that the United States was not going to be bound by its commitment to maintain the $35-per-ounce price of gold. The variability then jumps up around the time of the Bretton Woods collapse in 1971—see Atkeson and Kehoe’s figures 6A–6E—and continues to rise into the Volcker disinflation, which was painful (with extremely high nominal interest rates) but ultimately succeeded in restoring some semblance of a nominal anchor.

What about the return to stability that may have occurred around 1990? That year is, of course, the year in which the first central bank (New Zealand’s) officially adopted a monetary policy regime of “inflation targeting” (IT). At the time, this was taken to mean a policy whose only objective was a low and stable inflation rate. Since then, the IT term has come to be applied to regimes that give more weight to output/employment stabilization, but most monetary economists understand it as continuing to emphasize inflation control as the primary goal. So again the timing is about right for the possible recovery of anchored expectations that the first empirical regularity is said to reflect.

To this general line of argument, Atkeson and Kehoe object: “But this answer is, at best, superficial. In the prewar era, countries chose to be on the gold standard most of the time and chose to leave it when it suited their purposes. Thus, the relevant questions are, rather, What deeper forces led agents to have confidence that their governments would choose stable policy over the long term? And what forces led them to lose this confidence after World War II?
Only if we can quantitatively account for this history can we give advice on how to avoid another great inflation.”

In this regard it must be said that I consider an explanation of the evolution of beliefs regarding the monetary standard, held by citizens of the United States, Great Britain, Germany, and so forth, to be somewhat beyond the scope of monetary policy analysts. To think about this issue, one must recognize that historically “the gold standard” required not just that the monetary authority stand ready to exchange gold and currency at a specified rate but also that this rate remain unchanged “forever.” That arrangement ensured that severe inflation would not occur—even the major historical gold discoveries did not generate sustained inflation on the order of 10% per year—but it did generate more cyclical instability of real variables than we have had in the postwar era. Could a policy of that type win popular support in today’s environment in the United States? If not, which would be my answer, then we would need an entire unified social science to provide an explanation at “a deeper level.” And such an explanation—which would need to emphasize enormous developments in the media, extensions of suffrage, the evolution of religious beliefs, attitudes toward the role of government, and so on—would not be of much help to central bankers. Let us turn, then, to monetary policy analysis considered more narrowly.

III. Basic Analysis

The heart of Atkeson and Kehoe’s paper is a recommended response to the third and fourth of the regularities mentioned above, that is, that measured excess returns on multiperiod bonds fluctuate strongly with yield spreads for bonds of different maturities and for international exchange rates.
These regularities are translated by Atkeson and Kehoe into an argument that the consumption Euler equation, some version of which (often termed an expectational IS equation) is one basic ingredient of current macro-monetary models, performs very poorly empirically. This is, of course, true for the simplest versions, but that problem has been widely recognized by monetary economists. A nice overview of the empirical weaknesses of so-called New Keynesian models was provided some years ago in a working paper by Richard Dennis (2003), which is briefly and nontechnically summarized in Dennis (2004). (The weaknesses discussed there relate to the Calvo-style price adjustment relation, as well as the consumption Euler equation.) Dennis distinguishes between the bare-bones “canonical model” and a “hybrid” version that adds habit formation in consumption behavior to the basic consumption-saving relationship and also adds a somewhat dubious dependence on lagged inflation to the basic Calvo price adjustment relation. He recognizes, following Estrella and Fuhrer (2002), that “the problem with the canonical model is that the behavior of output, consumption, prices, and interest rates suggested by the model are fundamentally at odds with observed data” (Dennis 2004, 1). The hybrid model performs better, in terms of matching quarterly data, but “there are a number of areas where the hybrid model’s responses differ importantly from” the impulse responses of an identified vector autoregression (VAR; Dennis 2004, 3).

The point here is that monetary economists are quite aware that current models, even with elaborations of the type utilized by Christiano, Eichenbaum, and Evans (2005) or Smets and Wouters (2007), have empirical weaknesses, and they have been active in trying to eliminate these problems through improved specification.
One pertinent and recent example concerns the discouraging results reported by Canzoneri, Cumby, and Diba (2007), namely, that inclusion of habit formation in consumption behavior unrealistically increases the variability of interest rates.4 Subsequent results by Collard and Dellas (2007) indicate, however, that this deterioration obtains when the household utility function is taken to be additively separable in consumption and leisure. If instead consumption and leisure enter the function in a Cobb-Douglas manner, then inclusion of habit results in an improved—not worsened—match of the model’s interest rate variability to that of the data.

I might also remark that Atkeson and Kehoe’s way of considering the empirical failure of the Euler equation seems questionable. Specifically, they discuss the relationship in a manner that would be appropriate if the role of this equation were to explain movements in nominal interest rates of various maturities. In fact, however, the role of this equation in standard monetary policy models is to explain consumption in response to (real) interest rates and expected future consumption (and, in habit specifications, lagged consumption). No mention of the adequacy or inadequacy of the standard model’s properties with regard to consumption is provided.5

Be that as it may, it is essential to consider the analytical heart of Atkeson and Kehoe’s paper, which is their presentation of “a simple model of the pricing kernel that is consistent with these [observed] dynamics” pertaining to interest rates. For the one-period nominal interest rate, i_t in their notation, the pricing kernel m_{t+1} is an unobservable random variable generated by a stochastic process such that the interest rate can be determined by a relation of the form i_t = −log E_t exp(m_{t+1}). Assuming conditional lognormality, we then have

(1) i_t = −E_t m_{t+1} − (1/2) Var_t(m_{t+1}).
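Equation (1) is just the normal moment-generating function applied to the definition of i_t; a one-line derivation in the quoted notation:

```latex
% If m_{t+1} is conditionally normal, the normal moment-generating
% function gives
E_t \exp(m_{t+1})
  = \exp\!\Big(E_t m_{t+1} + \tfrac{1}{2}\operatorname{Var}_t m_{t+1}\Big),
\quad\text{so}\quad
i_t = -\log E_t \exp(m_{t+1})
    = -E_t m_{t+1} - \tfrac{1}{2}\operatorname{Var}_t m_{t+1}.
```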
Except for lognormality, the content of their model for i_t is then the specification of the stochastic process generating m_{t+1}. They take it to be

(2) −m_{t+1} = δ + z_{1t} + σ_1 ε_{1,t+1} + (1 − λ²/2) z_{2t} + z_{2t}^{1/2} λ ε_{2,t+1} + σ_3 ε_{3,t+1},

where ε_{1t}, ε_{2t}, and ε_{3t} are independent, standard normal, white-noise innovations and where

(3) z_{1,t+1} = z_{1t} + σ_1 ε_{1,t+1},
(4) z_{2,t+1} = (1 − φ)θ + φ z_{2t} + z_{2t}^{1/2} σ_2 ε_{2,t+1}.

These processes are chosen with an eye to their implications for the term structure via the relation

(5) 1 = E_t exp(m_{t+1} + p_{t+1}^{k−1} − p_t^k),

which characterizes the absence of arbitrage possibilities for k-period bonds with (log) prices p_t^k. From these prices the analyst can calculate term structure measures.

Finally, Atkeson and Kehoe calibrate the model by assuming that λ = √2, φ = 0.99, and σ_2 = 0.017. This specification suffices, they report, to generate interest rates of different maturities whose long and short rates possess the general characteristics found in their exploration of monthly U.S. data for rates of various maturities.

How does this model compare in specification with the standard three-equation framework used in recent years to model one-period interest rates, consumption (and/or output), and inflation by Clarida, Gali, and Gertler (1999), McCallum (2001), Woodford (2003, 238–47), and dozens of other monetary economists? That framework, as is well known, consists of (i) a consumption Euler equation (aka expectational IS relation), (ii) a price adjustment relation (usually of the Calvo variety), and (iii) a monetary policy rule that specifies adjustments of the one-period nominal policy rate i_t to its determinants, which include the steady-state real interest rate, the central bank’s inflation target, departures of inflation from target, and departures of output from its natural (flexible-price) rate.
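Because everything is conditionally normal, the moments in (1) are available in closed form from (2): E_t[−m_{t+1}] = δ + z_{1t} + (1 − λ²/2)z_{2t} and Var_t m_{t+1} = σ_1² + λ²z_{2t} + σ_3². A simulation sketch using the calibrated λ, φ, and σ_2; the remaining parameters (δ, σ_1, σ_3, θ) are placeholders I chose for illustration, not the authors' values:

```python
import numpy as np

# Short rate implied by the pricing kernel (2)-(4) via equation (1).
rng = np.random.default_rng(1)
T = 1200
lam, phi, sigma2 = np.sqrt(2.0), 0.99, 0.017              # authors' calibration
delta, sigma1, sigma3, theta = 0.03, 0.0005, 0.002, 0.02  # illustrative only

z1 = np.zeros(T)
z2 = np.full(T, theta)
i = np.zeros(T)
for t in range(T - 1):
    # closed-form conditional moments of m_{t+1}:
    #   E_t[-m_{t+1}]  = delta + z1 + (1 - lam^2/2) * z2
    #   Var_t[m_{t+1}] = sigma1^2 + lam^2 * z2 + sigma3^2
    i[t] = (delta + z1[t] + (1.0 - lam**2 / 2.0) * z2[t]
            - 0.5 * (sigma1**2 + lam**2 * z2[t] + sigma3**2))
    z1[t + 1] = z1[t] + sigma1 * rng.normal()
    z2[t + 1] = max((1.0 - phi) * theta + phi * z2[t]
                    + np.sqrt(z2[t]) * sigma2 * rng.normal(), 0.0)

# With lam = sqrt(2) the loading on z2 collapses to 1 - lam^2 = -1, so
# i_t = delta + z1_t - z2_t - 0.5 * (sigma1^2 + sigma3^2).
print("mean simulated short rate:", round(i[:T - 1].mean(), 4))
```

At the calibration λ = √2 the loading on z_{2t} is 1 − λ² = −1, so increases in the conditional-variance state lower the short rate in this sketch.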
(The lagged rate i_{t−1} is often included as well, to represent smoothing.) This framework implicitly adopts the expectations theory of the term structure, which is known to be inconsistent with the data. Notable examples of larger models that include more variables and equations but share the same basic underlying logic are provided by Christiano et al. (2005) and Smets and Wouters (2007).

One aspect of the comparison is that the Atkeson-Kehoe model, since it pertains to an “endowment economy,” implicitly assumes that price level adjustments are complete within each period, so that output always equals its (exogenous) natural-rate, flexible-price value. Only a degenerate version of the Calvo-equation component of the standard model is therefore present. That removes one endogenous variable, output/consumption. For some purposes, a flexible-price model can be useful for deriving monetary policy principles, as in Woodford (2003, chap. 2). But Atkeson and Kehoe also treat inflation as exogenous. Thus, there is no possibility remaining for conducting monetary policy analysis: inflation is simply not determined by central bank behavior. These features are consistent with their expressed view that the central bank “simply responds to exogenous changes in real risk—specifically, to exogenous changes in the conditional variance of the real pricing kernel—with the aim of maintaining inflation close to a target level.” But this seems highly unsatisfactory. It is probably true that a substantial portion of the meeting-to-meeting variations in the federal funds rate in the United States represents adjustments that respond to changes in real rates brought about by changes in tastes, technology, shocks from abroad, and perhaps even some random behavioral errors by private agents.
In fact, such real-rate-driven adjustments are implied by much of the analysis that represents today’s mainstream monetary policy analysis—see, for example, Woodford (2003). But in the modeling approach suggested by Atkeson and Kehoe, with inflation and output both treated as exogenous and with no policy rule relating a central bank instrument to endogenous variables, the model is simply not usable for monetary policy analysis. Atkeson and Kehoe are right to say that the Euler equation specification in many monetary models does not perform well empirically, and that standard models have Euler equations that include no term reflecting time‐varying risk. In addition, their specification of stochastic processes yielding a pricing kernel with term structure features that match the data in ways the standard models miss is interesting and suggestive. They argue that models in which conditional variances of returns are variable provide an attractive possibility for improved model specification. This insight is not new, of course, and it neither requires treating inflation and output as exogenous nor leads to a model supporting their highly unorthodox view about the nature of monetary policy in the United States. There is a literature that combines term structure pricing with time‐varying risk premia in models that also feature endogenous price adjustment and monetary policy rules; these contributions have gone beyond Atkeson and Kehoe in moving toward models that match the term structure regularities while maintaining a framework for monetary policy analysis. Nor is the approach based on time‐varying conditional variances the only one available, as the Collard and Dellas (2007) example shows. In sum, I am sympathetic to some features of the Atkeson and Kehoe critique of today’s New Keynesian monetary policy models, but I find their current alternative to be lacking in essential respects, their characterization of U.S. monetary policy to be implausible, and their critique of current monetary policy analysis to be overstated. For a brief statement of their views, see Atkeson and Kehoe (2007).
References

Atkeson, Andrew, and Patrick J. Kehoe. 2007. “If Exchange Rates Are Random Walks, Then Almost Everything We Say about Monetary Policy Is Wrong.” American Economic Review 97 (2): 339–45.

Christiano, Lawrence J., Martin Eichenbaum, and Charles L. Evans. 2005. “Nominal Rigidities and the Dynamic Effects of a Shock to Monetary Policy.” Journal of Political Economy 113 (1): 1–45.

Clarida, Richard, Jordi Galí, and Mark Gertler. 1999. “The Science of Monetary Policy: A New Keynesian Perspective.” Journal of Economic Literature 37 (4): 1661–1707.

Goodfriend, Marvin. 2007. “How the World Achieved Consensus on Monetary Policy.” Journal of Economic Perspectives 21 (4): 47–68.

McCallum, Bennett T. 2001. “Monetary Policy Analysis in Models without Money.” Federal Reserve Bank of St. Louis Review 83 (4): 145–60.

Smets, Frank, and Rafael Wouters. 2007. “Shocks and Frictions in US Business Cycles: A Bayesian DSGE Approach.” American Economic Review 97 (3): 586–606.

Woodford, Michael. 2003. Interest and Prices: Foundations of a Theory of Monetary Policy. Princeton, NJ: Princeton University Press.

  • Research Article
  • Cite Count Icon 1
  • 10.2139/ssrn.2332124
Environmental Policy and Macroeconomic Dynamics in a New Keynesian Model
  • Sep 27, 2013
  • SSRN Electronic Journal
  • Barbara Annicchiarico + 1 more

This paper studies the dynamic behaviour of an economy under different environmental policy regimes in a New Keynesian (NK) model with nominal and real uncertainty. We find the following results: (i) an emissions cap policy is likely to dampen macroeconomic fluctuations; (ii) staggered price adjustment alters significantly the performance of the environmental policy regime put in place, especially with an emissions intensity target; (iii) welfare tends to be higher with a tax on emissions when prices are sticky; (iv) the optimal policy response to inflation is found to be very strong as long as welfare is not affected by environmental quality and the environmental policy does not consist in an emissions cap.

  • Research Article
  • 10.5089/9781451851045.001.a001
Pricing Policies and Inflation Inertia
  • Apr 1, 2003
  • Michael Kumhof + 2 more

The paper proposes a monetary model with nominal rigidities that differs from the conventional New Keynesian model in that firms set pricing policies instead of price levels. In response to permanent or highly persistent monetary policy shocks this model generates the empirically observed slow (inertial) and prolonged (persistent) reaction of the inflation rate, and also the recession which typically accompanies moderate disinflations. The reason is that firms respond to such shocks mostly through a change in the long-run or inflation updating component of their pricing policies. With staggered pricing policies this takes time to be reflected in aggregate inflation. __________________ This paper was previously circulated under the title “Macroeconomic Dynamics under Inflation Inertia: An Optimizing Model”. The authors thank Ariel Burstein, Guillermo Calvo, Chris Erceg, Charles Goodhart, Andrew Levin and Zheng Liu for very helpful comments. A major part of this research was completed while Michael Kumhof visited the Research Departments of the IMF and the IDB. Their support is very gratefully acknowledged. E-mails: lcespede@bcentral.cl; kumhof@stanford.edu; eparrado@imf.org.

  • Research Article
  • 10.1086/663987
Editorial
  • Jan 1, 2012
  • NBER Macroeconomics Annual
  • Daron Acemoglu + 1 more

Editorial

  • Preprint Article
  • Cite Count Icon 1
  • 10.5167/uzh-52306
NOMINAL AND REAL INTEREST RATES DURING AN OPTIMAL DISINFLATION IN NEW KEYNESIAN MODELS
  • Dec 1, 2007
  • Marcus Hagedorn

Central bankers' conventional wisdom suggests that nominal interest rates should be raised to implement a lower inflation target. In contrast, I show that the standard New Keynesian monetary model predicts that nominal interest rates should be decreased to attain this goal. Real interest rates, however, are virtually unchanged. These results also hold in recent vintages of New Keynesian models with sticky wages, price and wage indexation, and habit formation in consumption.

  • Research Article
  • 10.1086/680631
Comment
  • Jan 1, 2015
  • NBER Macroeconomics Annual
  • Mark Gertler

Comment
