Low inflation and monetary policy in the euro area

Abstract

Inflation in the euro area has been falling since mid-2013, turned negative at the end of 2014 and remained below target thereafter. This paper employs a Bayesian VAR to quantify the contribution of a set of structural shocks, identified by means of sign restrictions, to inflation and economic activity. Shocks to oil supply do not tell the full story about the disinflation that started in 2013, as both aggregate demand and monetary policy shocks also played an important role. The lower bound on policy rates rendered the European Central Bank's (ECB) conventional monetary policy de facto contractionary. A country-level analysis confirms that the negative effects of oil supply and monetary policy shocks on inflation were widespread, albeit with different intensity across countries. The ECB's unconventional measures since 2014 contributed to raising inflation and economic activity in all countries. All in all, our analysis confirms the appropriateness of the ECB asset purchase programme.
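The identification strategy named in the abstract — sign restrictions on a VAR — can be illustrated with a toy example. The sketch below (a hypothetical two-variable system, not the paper's actual model; all numbers are invented) follows the standard rotation-based algorithm: candidate impact matrices are generated by rotating the Cholesky factor of the reduced-form covariance with random orthogonal matrices, and a draw is kept only if its impulse responses carry the postulated signs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-variable system (inflation, activity); NOT the paper's model.
# True impact matrix: a "demand" shock moves both variables up, a "supply"
# shock moves inflation up and activity down.
A0 = np.array([[0.8, 0.5],
               [0.6, -0.7]])
sigma = A0 @ A0.T          # reduced-form covariance the econometrician observes

# Sign-restriction identification: draw random orthogonal rotations Q of the
# Cholesky factor and keep candidates whose impact responses satisfy the
# postulated signs. Any accepted candidate C reproduces sigma, since
# C @ C.T = chol @ Q @ Q.T @ chol.T = sigma.
chol = np.linalg.cholesky(sigma)
accepted = []
for _ in range(2000):
    q, r = np.linalg.qr(rng.standard_normal((2, 2)))
    q = q @ np.diag(np.sign(np.diag(r)))   # normalize for a uniform draw
    cand = chol @ q
    # Column 0 = demand shock: both impact responses positive.
    # Column 1 = supply shock: inflation up, activity down.
    if (cand[:, 0] > 0).all() and cand[0, 1] > 0 and cand[1, 1] < 0:
        accepted.append(cand)

print(len(accepted), "accepted draws out of 2000")
```

The restrictions pin down only a set of admissible impact matrices, not a single one; impulse responses are then reported across the accepted draws, which is why sign restrictions are called a set-identification scheme.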

Similar Papers
  • Research Article
  • Cited by 17
  • 10.2139/ssrn.2910938
Low Inflation and Monetary Policy in the Euro Area
  • Jan 1, 2017
  • SSRN Electronic Journal
  • Antonio Maria Conti + 2 more

JEL Classification: C32, E31, E32, E52

  • Preprint Article
  • 10.17169/refubium-28493
Disentangling the effects of multidimensional monetary policy on inflation and inflation expectations in the euro area
  • Nov 6, 2020
  • Catalina Martínez-Hernández

The European Central Bank (ECB) has adopted a mixture of conventional and unconventional tools in order to achieve its mandate of price stability in the current low-inflation, low-interest-rate environment. This paper contributes to the existing literature by providing a taxonomy of the ECB's policy toolkit and by evaluating its implications for price stability and the anchoring of inflation expectations. I carry out my analysis based on a high-frequency identification and the estimation of a large Bayesian Vector Autoregression. I find evidence of re-anchored expectations in response to quantitative easing and forward guidance, i.e. forecasters revise their long-run expectations upwards. Consequently, inflation increases, which stresses the crucial role of expectations in the transmission of monetary policy.

  • Dissertation
  • 10.11588/heidok.00025813
An Empirical Analysis of Survey-Based Macroeconomic Forecasts and Uncertainty
  • Jan 1, 2019
  • Alexander Glas

Paper 1 (Chapter 2): We investigate the question of whether macroeconomic variables contain information about future stock volatility beyond that contained in past volatility. We show that forecasts of GDP/IP growth from the Federal Reserve's Survey of Professional Forecasters predict volatility in a cross-section of 49 industry portfolios. The expectation of higher growth rates is associated with lower stock volatility. Our results are in line with both counter-cyclical volatility in dividend news as well as in expected returns. Inflation forecasts predict higher or lower stock volatility depending on the state of the economy and the stance of monetary policy. Forecasts of higher unemployment rates are good news for stocks during expansions and go along with lower stock volatility. Our results hold in- as well as out-of-sample and pass various robustness checks.

Paper 2 (Chapter 3): We analyze the covariates of average individual inflation uncertainty and the cross-sectional variance of point forecasts ('disagreement') based on data from the European Central Bank's Survey of Professional Forecasters. We empirically confirm the implication from a theoretical variance decomposition that disagreement is an incomplete approximation to overall uncertainty. Both measures are associated with macroeconomic conditions and indicators of monetary policy, but the relations differ qualitatively. In particular, average individual inflation uncertainty is higher during periods of expansionary monetary policy, whereas disagreement rises during contractionary periods. This implies that conclusions based on disagreement as a single indicator of ex-ante uncertainty are incomplete and potentially misleading.

Paper 3 (Chapter 4): We analyze the relationship between forecaster disagreement and macroeconomic uncertainty in the Euro area using data from the European Central Bank's Survey of Professional Forecasters for the period 1999Q1-2018Q2.
We find that disagreement is generally a poor proxy for uncertainty. However, the strength of this link varies with the employed dispersion statistic, the choice of either the point forecasts or the histogram means to calculate disagreement, the considered outcome variable and the forecast horizon. In contrast, distributional assumptions do not appear to be very influential. The relationship is weaker during economically turbulent periods when indicators of uncertainty are needed most. Accounting for the entry and exit of forecasters to and from the survey has little impact on the results. We also show that survey-based uncertainty is associated with overall policy uncertainty, whereas forecaster disagreement is more closely related to the fluctuations on financial markets.

Paper 4 (Chapter 5): Although survey-based point predictions have been found to outperform successful forecasting models, corresponding variance forecasts are frequently diagnosed as heavily distorted. Forecasters who report inconspicuously low ex-ante variances often produce squared forecast errors that are much larger on average. In this paper, we document the novel stylized fact that this variance misalignment is related to the rounding behavior of survey participants. Discarding responses which are strongly rounded provides an easily implementable correction that (i) can be carried out in real time, i.e., before outcomes are observed, and (ii) delivers a significantly improved match between ex-ante and ex-post forecast variances. According to our estimates, uncertainty about inflation, output growth and unemployment in the U.S. and the Euro area is higher after correcting for the rounding effect. The increase in the share of non-rounded responses in recent years also helps to understand the trajectory of survey-based average uncertainty during the years after the financial and sovereign debt crisis.
Our findings are in line with assertions from the previous literature regarding the connection between survey respondents' rounding behavior and their uncertainty about future macroeconomic outcomes.
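The theoretical variance decomposition invoked in Paper 2 is the law of total variance: treating each forecaster's predictive density as a component of an equal-weighted mixture, overall uncertainty splits into average individual variance plus the cross-sectional variance of point forecasts (disagreement). A small numerical sketch, with made-up forecaster means and variances:

```python
import numpy as np

# Hypothetical panel of 5 forecasters: each reports a predictive mean
# (point forecast, in percent) and a predictive variance.
means = np.array([1.2, 1.5, 1.1, 1.8, 1.4])
variances = np.array([0.30, 0.25, 0.40, 0.20, 0.35])

# Law of total variance for the equal-weighted mixture ("consensus") density:
# overall uncertainty = average individual variance + disagreement.
avg_individual = variances.mean()
disagreement = means.var()           # cross-sectional variance of point forecasts
overall = avg_individual + disagreement

print(f"average individual uncertainty: {avg_individual:.4f}")
print(f"disagreement:                   {disagreement:.4f}")
print(f"overall (mixture) uncertainty:  {overall:.4f}")
```

Here disagreement (0.06) captures only a small fraction of overall uncertainty (0.36), which is the sense in which disagreement alone is an incomplete, and potentially misleading, proxy for ex-ante uncertainty.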

  • Research Article
  • Cited by 34
  • 10.1086/669584
Taylor Rule Exchange Rate Forecasting during the Financial Crisis
  • Mar 1, 2013
  • NBER International Seminar on Macroeconomics
  • Tanya Molodtsova + 1 more

Tanya Molodtsova (Emory University) and David H. Papell (University of Houston)

I. Introduction

The past few years have seen a resurgence of academic interest in out-of-sample exchange rate predictability. Gourinchas and Rey (2007, using an external balance model); Engel, Mark, and West (2008, using monetary, Purchasing Power Parity [PPP], and Taylor rule models); and Molodtsova and Papell (2009, using a variety of Taylor rule models) all report successful results for their models vis-à-vis the random walk null. There has even been the first revisionist response: Rogoff and Stavrakeva (2008) criticize the three abovementioned papers for their reliance on the Clark and West (2006) statistic, arguing that it is not a minimum mean squared forecast error statistic.

An important problem with these papers is that none of them use real-time data that was available to market participants.1 Unless real-time data is used, the "forecasts" incorporate information that was not available to market participants, and the results cannot be interpreted as successful out-of-sample forecasting. Faust, Rogers, and Wright (2003) initiated research on out-of-sample exchange rate forecasting with real-time data. Molodtsova, Nikolsko-Rzhevskyy, and Papell (2008) use real-time data to estimate Taylor rules for Germany and the United States and forecast the Deutsche mark/dollar exchange rate out-of-sample for 1989:Q1 to 1998:Q4.
Molodtsova, Nikolsko-Rzhevskyy, and Papell (2011), henceforth MNP (2011), use real-time data to show that inflation and either the output gap or unemployment, variables which normally enter central banks' Taylor rules, can provide evidence of out-of-sample predictability for the US dollar/euro exchange rate from 1999 to 2007. Adrian, Etula, and Shin (2011) show that the growth of US dollar-denominated banking sector liabilities forecasts appreciations of the US dollar from 1997 to 2007, but their results break down in 2008 and 2009.

Molodtsova and Papell (2009) conduct out-of-sample exchange rate forecasting with Taylor rule fundamentals, using the variables, including inflation rates and output gaps, that normally comprise Taylor rules. Engel, Mark, and West (2008) propose an alternative methodology for Taylor rule out-of-sample exchange rate forecasting. Using a Taylor rule with prespecified coefficients for the inflation differential, output gap differential, and real exchange rate, they construct the interest rate differential implied by the policy rule and use the resultant differential for exchange rate forecasting. We use a single equation version of their model, which we call the Taylor rule differentials model.2 Since there is no evidence that either the Fed or the European Central Bank (ECB) targets the exchange rate, we do not include the real exchange rate in the forecasting regression for either model.3

Out-of-sample exchange rate forecasting with Taylor rule fundamentals received blogosphere, as well as academic, notice in 2008. On July 28 and September 9, Menzie Chinn posted on Econbrowser a discussion of in-sample estimates of one of the specifications used in an early version of MNP (2011).4 On August 17, he posted an article by Michael Rosenberg of Bloomberg, who discussed Taylor rule fundamentals as a foreign currency trading strategy. By December 22, however, optimism had turned to pessimism.
Once interest rates hit the zero lower bound, they cannot be lowered further. With zero or near-zero interest rates for Japan and the United States, and predicted near-zero rates for the United Kingdom and the Euro Area, the prospects for Taylor rule exchange rate forecasting were bleak. A second theme of the post, however, was that there was nothing particularly promising on the horizon: even going back to the monetary model in a regime of quantitative easing faced doubtful prospects for success.5

The events of 2007 to 2009 focused the attention of economists on the importance of financial conditions. On August 9, 2007, the spread between the dollar London interbank offer rate (Libor) and the overnight index swap (OIS), an indicator of financial stress in the interbank loan market, jumped from 13 to 40 basis points on concerns that problems in the subprime mortgage market were spreading to the broader mortgage market.6 The spreads mostly fluctuated between 50 and 90 basis points until September 17, 2008, when they spiked following the announcement that Lehman Brothers had filed for bankruptcy, peaking on October 10 at over 350 basis points. Following the end of the panic phase of the financial crisis in October 2008, the spread gradually returned to near precrisis levels in September 2009. The spread increased again, although not nearly as sharply, in mid-2010 and late 2011. The spreads are depicted in figure 1 (Fig. 1. Credit spreads and financial stress indexes with their differentials).

The deteriorating financial situation in late 2007 and 2008 inspired several proposals for linking monetary policy to financial conditions. Mishkin (2008) argued that, when a financial disruption occurs, the Fed should cut interest rates to offset the negative effects of financial turmoil on aggregate economic activity.
McCully and Toloui (2008) suggested that, because of tightened financial conditions, the Fed needed to lower the policy rate by 100 basis points in early February 2008 in order to keep the neutral rate constant. Meyer (2009) argued that the Taylor rule without considerations of financial conditions could not explain aggressive Fed policy in early 2008.

Taylor (2008) proposed adjusting the systematic component of monetary policy by subtracting a smoothed version of the Libor-OIS spread from the interest rate target that would otherwise be determined by deviations of inflation and real GDP from their targets according to the Taylor rule. He argued that such an adjustment, which would have been about 50 basis points in late February 2008, would be a more transparent and predictable response to financial market stress than a purely discretionary adjustment.

Curdia and Woodford (2010) modify the Taylor rule with an adjustment for changes in interest rate spreads. Using a dynamic stochastic general equilibrium (DSGE) model with credit frictions, they show that incorporating spreads can improve upon a standard Taylor rule, although the optimal size of the adjustment is smaller than proposed by Taylor and depends on the source of variation in the spreads.

The spread between the euro interbank offer rate (Euribor) and the euro OIS also jumped in August 2007 and spiked in September and October 2008, although not by as much as the US spread. While the Euribor-OIS spread came down in September 2009, it did not return to its precrisis levels. During August and December 2010, the spread jumped to as high as 40 basis points and, in December 2011, reached a maximum of 100 basis points. The end-of-quarter Libor-OIS, Euribor-OIS, and the difference between the Libor-OIS and Euribor-OIS spreads are depicted in figure 1.
After the gap between the two spreads narrowed in 2008:Q4, the spread turned against the Euro Area, reaching a maximum in 2011:Q3 and 2011:Q4 before narrowing in 2012:Q1.

This paper investigates out-of-sample exchange rate forecasting during the financial crisis with Taylor rule-based models that incorporate indicators of financial stress. We use one-quarter-ahead forecasts and estimate models with core inflation and both the output gap and the unemployment gap for the Taylor rule fundamentals and Taylor rule differentials models.7 When the Libor-OIS/Euribor-OIS differential is included in the forecasting regression, we call the models spread-adjusted Taylor rule fundamentals and differentials models. According to these models, when the Libor-OIS spread increases, the Fed would be expected to either lower the interest rate or, if it had already attained the zero lower bound, engage in quantitative expansion, depreciating the dollar. When the Euribor-OIS spread increases, the ECB would be expected to react similarly, depreciating the euro. We therefore use the difference between the Libor-OIS and Euribor-OIS spreads in addition to the difference between the United States and Euro Area inflation rates and output gaps for out-of-sample forecasting of the dollar/euro exchange rate.

Another widely used credit spread is the Ted spread, the three-month Libor/three-month Treasury spread for the United States and the three-month Euribor/three-month Treasury spread for the Euro Area. As shown in figure 1, the US Ted spread was generally higher than the Euro Area Ted spread until 2008 and the Ted spread differential was more variable than the Libor-OIS/Euribor-OIS differential. The Euro Area Ted spread spiked with the US Ted spread in 2008:Q3, and so the differential does not display a spike at the peak of the financial crisis. Subsequent to the financial crisis, the Ted spread differential is similar to the Libor-OIS/Euribor-OIS differential.
It turns against the Euro Area in 2009, reaches a maximum in 2011:Q3 and 2011:Q4, and narrows in 2012:Q1. We use the difference between the US and Euro Area Ted spreads as an alternative indicator of financial stress.

Financial Conditions Indexes (FCIs) that summarize information about the future state of the economy contained in a number of current financial variables have received considerable attention in recent years. Hatzius et al. (2010) show that FCIs outperform individual financial variables that are considered to be useful leading indicators in their ability to predict the growth of different measures of real economic activity. We therefore augment the Taylor rule by using the difference between the Bloomberg and Organization for Economic Cooperation and Development (OECD) FCIs for the United States and the Euro Area for out-of-sample forecasting of the dollar/euro exchange rate.8 The Bloomberg and OECD FCIs are depicted in figure 1 where, in contrast to the credit spreads, an increase represents an improvement in financial conditions. Financial conditions deteriorate sharply for both the United States and the Euro Area in late 2008, but turn in favor of the United States starting in 2009.

Real-time data for the United States is available in vintages starting in 1966, with the data for each vintage going back to 1947. Real-time data for the Euro Area, however, is only available in vintages starting in 1999:Q4, with the data for each vintage going back to 1991:Q1. While the euro/dollar exchange rate is only available since the advent of the euro in 1999, "synthetic" rates are available since 1993. We use rolling regressions to forecast exchange rate changes starting in 1999:Q4, with 26 observations in each regression. Keeping the number of observations constant, we report results ending in 2007:Q1, with 30 forecasts, through 2012:Q1, with 50 forecasts.
We report the ratio of the mean squared prediction errors (MSPE) of the linear and random walk models and the CW test statistic of Clark and West (2006).9

The Taylor rule fundamentals model with the unemployment gap produces very strong results. The MSPE of the Taylor rule model is smaller than the MSPE of the random walk model and the random walk null can be rejected in favor of the Taylor rule model using the CW test at the 5 percent level for the initial set of forecasts ending in 2007:Q1. As the number of forecasts increases, the MSPE ratios decrease and the strength of the rejections increases, peaking at the 1 percent level in 2008:Q1. In the following quarter, 2008:Q2, the MSPE ratios start to rise and continue to increase through 2009:Q1 (although the rejections continue at the 5 percent level or higher). Starting in mid-2009, the MSPE ratios stabilize and the random walk can be rejected in favor of the Taylor rule model at the 5 percent significance level for all specifications between 2009:Q2 and 2012:Q1.

The results for the other models are not as strong. For the Taylor rule differentials model with the output gap, the random walk null can be rejected at the 10 percent level or higher from 2007:Q1 to 2008:Q3 and 2009:Q2 to 2009:Q4, but not otherwise. For the Taylor rule fundamentals model with the output gap and the Taylor rule differentials model with the unemployment gap, the random walk null can only be rejected at the 10 percent level or higher from 2007:Q1 to 2008:Q2.

A major innovation in this paper is to incorporate indicators of financial stress, measured by the difference between the Libor-OIS and Euribor-OIS spreads, the US and Euro Area Ted spreads, the US and Euro Area Bloomberg FCIs, and the US and Euro Area OECD FCIs, for out-of-sample exchange rate forecasting with Taylor rule models. The strongest results are again for the Taylor rule fundamentals model with the unemployment gap.
Using the OECD FCI, the random walk null can be rejected in favor of the linear model alternative at the 5 percent level for all but one set of forecasts, and at the 10 percent level for the remaining forecast. Using the three other indicators, the null can be rejected at the 10 percent level or higher for over half of the forecasts, with the strongest results for the forecasts ending between 2007 and 2009. As with the original Taylor rule model, the augmented Taylor rule differentials model with the output gap is the next most successful, with the random walk null rejected at the 10 percent level or higher for all forecasts using the OECD FCI and at the 10 percent level or higher for over half of the forecasts with the three other indicators. The rejections for the other two augmented models are concentrated in 2007 and 2008.

We proceed to compare the original and augmented models for the two most successful specifications. For the Taylor rule fundamentals models with the unemployment gap, the original model null can be rejected in favor of the augmented model alternative at the 5 percent level for virtually every set of forecasts ending between 2007:Q1 and 2008:Q2 for all four financial stress indicators. For the forecasts ending between 2008:Q3 and 2012:Q1, however, the original model null is never rejected. For the Taylor rule differentials model with the output gap, there is some evidence in favor of the alternative specification with the Ted spread, Bloomberg FCI, and OECD FCI.

We also compare the out-of-sample performance of the Taylor rule models with the monetary, PPP, and interest rate differentials models. For the interest rate differentials model, the MSPE ratios are below one and the random walk can be rejected with the CW tests from 2007:Q1 to 2008:Q2. Starting with the panic period of the financial crisis in 2008:Q3, the MSPE ratios rise above one and the random walk null can only be rejected for the forecasts ending in 2009:Q1 and 2012:Q1.
The monetary and PPP models cannot outperform the random walk for any forecast interval. The evidence of out-of-sample exchange rate predictability is much stronger with the Taylor rule models than with the traditional models.

II. Exchange Rate Forecasting Models

Evaluating exchange rate models out of sample was initiated by Meese and Rogoff (1983), who could not reject the naïve no-change random walk model in favor of the existent empirical exchange rate models of the 1970s. Starting with Mark (1995), the focus of the literature shifted toward deriving a set of long-run fundamentals from different models, and then evaluating out-of-sample forecasts based on the difference between the current exchange rate and its long-run value. Engel, Mark, and West (2008) use the interest rate implied by a Taylor rule, and Molodtsova and Papell (2009) use the variables that enter Taylor rules to evaluate exchange rate forecasts.

A. Taylor Rule Fundamentals Model

We examine the linkage between the exchange rate and a set of variables that arise when central banks set the interest rate according to the Taylor rule. Following Taylor (1993), the monetary policy rule postulated to be followed by central banks can be specified as

i_t = π_t + φ(π_t − π̄) + γy_t + R,   (1)

where i_t is the target for the short-term nominal interest rate, π_t is the inflation rate, π̄ is the target level of inflation, y_t is the output gap, the percent deviation of actual real GDP from an estimate of its potential level, and R is the equilibrium level of the real interest rate.10

According to the Taylor rule, the central bank raises the target for the short-term nominal interest rate if inflation rises above its desired level and/or output is above potential output.
The target level of the output deviation from its natural rate y_t is 0 because, according to the natural rate hypothesis, output cannot permanently exceed potential output. The target level of inflation is positive because it is generally believed that deflation is much worse for an economy than low inflation. The unemployment gap, the difference between the unemployment rate and the natural rate of unemployment, can replace the output gap in equation (1) as in Blinder and Reis (2005) and Rudebusch (2010). In that case, the coefficient γ would be negative so that the Fed raises the interest rate when the unemployment rate is below the natural rate of unemployment. Taylor assumed that the output and inflation gaps enter the central bank's reaction function with equal weights of 0.5 and that the equilibrium level of the real interest rate and the inflation target were both equal to 2 percent.

The parameters π̄ and R in equation (1) can be combined into one constant term, μ = R − φπ̄, which leads to the following equation:

i_t = μ + λπ_t + γy_t,   (2)

where λ = 1 + φ. Because λ > 1, the real interest rate is increased when inflation rises, and so the Taylor principle is satisfied. Following Taylor (2008) and Curdia and Woodford (2010), the original Taylor rule can be modified by subtracting a multiple of the spread between the dollar Libor rate and the OIS rate,

i_t = μ + λπ_t + γy_t − τs_t,   (3)

where s_t is the spread.

We do not incorporate several modifications of the Taylor rule that, following Clarida, Galí, and Gertler (1998), are typically used for estimation. Lagged interest rates are usually included in estimated Taylor rules to account for either (a) partial adjustment of the federal funds rate to the rate desired by the Federal Reserve, or (b) desired interest rate smoothing on the part of the Federal Reserve.
Since the most successful exchange rate forecasting specifications for the dollar/euro rate in MNP (2011) did not include a lagged interest rate and Walsh (2010) shows that the Federal Reserve lowered the interest rate during the financial crisis faster than would be consistent with interest rate smoothing, we do not include lagged interest rates. The real exchange rate is often included in specifications that involve countries other than the United States. Since there is no evidence that the ECB uses the real exchange rate as a policy objective and inclusion of the real exchange rate worsens exchange rate forecasts in MNP (2011), we do not include it. Finally, while inflation forecasts are often used on the grounds that Federal Reserve policy is forward looking, there is no publicly available data on euro area core inflation forecasts.

To derive the Taylor rule based forecasting equation, we construct the implied interest rate differential by subtracting the interest rate reaction function for the Euro Area from that for the United States:

i_t − i*_t = α + λ(π_t − π*_t) + γ(y_t − y*_t) − τ(s_t − s*_t),   (4)

where asterisks denote Euro Area variables and α is a constant. It is assumed that the coefficients on inflation and the output gap are the same for the United States and the Euro Area, but the inflation targets and equilibrium real interest rates are allowed to differ.11

Based on empirical research on the forward premium and delayed overshooting puzzles by Eichenbaum and Evans (1995), Faust and Rogers (2003) and Scholl and Uhlig (2008), and the results in Gourinchas and Tornell (2004) and Bacchetta and van Wincoop (2010), who show that an increase in the interest rate can cause sustained exchange rate appreciation if investors either systematically underestimate the persistence of interest rate shocks or make infrequent portfolio decisions, we postulate the following exchange rate forecasting equation:12

Δe_{t+1} = ω − ω_π(π_t − π*_t) − ω_y(y_t − y*_t) + ω_s(s_t − s*_t) + η_{t+1},   (5)

where asterisks denote Euro Area variables, ω is a constant, and ω_π, ω_y, and ω_s are positive coefficients.
Alternatively, the unemployment gap differential (with opposite sign) can substitute for the output gap differential in equation (5). The variable e_t is the log of the US dollar nominal exchange rate determined as the domestic price of foreign currency, so that an increase in e_t is a depreciation of the dollar. The reversal of the signs of the coefficients between (4) and (5) reflects the presumption that anything that causes the Fed and/or ECB to raise the US interest rate relative to the Euro Area interest rate will cause the dollar to appreciate (a decrease in e_t). Since we do not know by how much a change in the interest rate differential (actual or forecasted) will cause the exchange rate to adjust, we do not have a link between the magnitudes of the coefficients in (4) and (5).13

The difference between the US and Euro Area Ted spreads, Bloomberg FCIs, and OECD FCIs can also be used as the measure of the spread differential. An increase in the US spreads relative to the Euro Area spreads would cause forecasted dollar depreciation. Because the FCIs are constructed so that an increase represents an improvement in financial conditions, the sign of the coefficient on the FCI differentials would be negative so that a relative deterioration in US financial conditions would still lead to forecasted dollar depreciation.

B. Taylor Rule Differentials Model

Engel, Mark, and West (2008) propose an alternative Taylor rule based model, which we call the Taylor rule differentials model to differentiate it from both the interest rate differentials model and the Taylor rule fundamentals model. They posit, rather than estimate, coefficients for the Taylor rule and subtract the interest rate reaction function for the Euro Area from that for the United States to obtain implied interest rate differentials,

i_t − i*_t = 1.5(π_t − π*_t) + 0.5(y_t − y*_t),   (6)

where the constant is equal to zero, assuming that the inflation target and equilibrium real interest rate are the same for the United States and the Euro Area.
Out-of-sample exchange rate forecasting is conducted using single equation and panel error correction models.14

We estimate a variant of the Taylor rule differentials model with two measures of economic activity: OECD estimates of the output gap and the unemployment gap. In order to obtain an implied interest rate differential that corresponds to the implied interest rate differential (6) with the unemployment gap as the measure of real economic activity, we use a coefficient of −1.0. This is consistent with a coefficient of 0.5 on the output gap if the Okun's law coefficient is 2.0.

The Taylor rule differentials model using Taylor's original coefficients would have a coefficient of 1.5 on the inflation differential, 0.5 on the output gap differential, and would not include the real exchange rate.15 During 2009 and 2010, a number of commentators, most notably Rudebusch (2010), argued that the appropriate output or unemployment gap coefficient in the Taylor rule for the United States should be double the coefficient in Taylor's original rule. While there has been an active policy debate on the normative question of whether prescribed Taylor rule interest rates should be calculated using Taylor's original specification or with larger coefficients, it is clear that the latter provide a better fit for Fed policy in the 2000s.16 Since the same argument has not been made for the ECB, we implement this by estimating a Taylor rule differentials model with a coefficient of 1.0 on the output gap (or −2.0 on the unemployment gap) for the United States and 0.5 on the output gap (or −1.0 on the unemployment gap) for the ECB,

i_t − i*_t = α + 1.5(π_t − π*_t) + 1.0y_t − 0.5y*_t,   (7)

where α is a constant.

The implied interest rate differential can be used to construct an exchange rate forecasting equation where, as in the Taylor rule fundamentals model, the signs of the coefficients switch and we do not have a link between the magnitudes of the coefficients.
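The mechanics of the exercise — a forecasting regression in Taylor rule fundamentals, rolling one-quarter-ahead forecasts with a fixed 26-observation window, the MSPE ratio against the driftless random walk, and the Clark and West (2006) adjusted statistic — can be sketched on simulated data. Everything below is invented for illustration (the series, coefficients, and seed are not the paper's real-time US and Euro Area data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-ins for the real-time series: US-minus-Euro-Area
# differentials in inflation, the output gap, and the Libor-OIS/Euribor-OIS
# spread, plus a constant.
T = 80
X = np.column_stack([np.ones(T),
                     rng.standard_normal(T),    # inflation differential
                     rng.standard_normal(T),    # output gap differential
                     rng.standard_normal(T)])   # spread differential
# y[t] plays the role of the one-quarter-ahead exchange rate change, built
# with the sign pattern of equation (5): minus on the inflation and output
# gap differentials, plus on the spread differential.
y = X @ np.array([0.1, -0.3, -0.2, 0.25]) + 0.4 * rng.standard_normal(T)

# Rolling one-step-ahead forecasts with a fixed 26-observation window.
window = 26
fc_model, actual = [], []
for t in range(window, T):
    beta, *_ = np.linalg.lstsq(X[t - window:t], y[t - window:t], rcond=None)
    fc_model.append(X[t] @ beta)
    actual.append(y[t])
fc_model = np.array(fc_model)
actual = np.array(actual)
fc_rw = np.zeros_like(actual)        # driftless random walk forecasts no change

err_model = actual - fc_model
err_rw = actual - fc_rw
mspe_ratio = np.mean(err_model**2) / np.mean(err_rw**2)

# Clark-West adjusted MSPE comparison: the adjustment term removes the
# parameter-estimation noise that penalizes the larger model under the null.
f = err_rw**2 - (err_model**2 - (fc_rw - fc_model)**2)
cw_stat = np.sqrt(len(f)) * f.mean() / f.std(ddof=1)

print(f"MSPE ratio (Taylor rule model / random walk): {mspe_ratio:.3f}")
print(f"Clark-West statistic: {cw_stat:.2f}")
```

An MSPE ratio below one and a significantly positive CW statistic correspond to the rejections of the random walk null reported above; Clark and West recommend one-sided standard normal critical values (about 1.282 at the 10 percent level and 1.645 at 5 percent).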

  • Research Article
  • 10.15157/tpep.v22i2.11857
Unkonventionelle Maßnahmen der Geldpolitik: Eine kritische Beurteilung. Non-conventional measures of monetary policy: a critical assessment
  • Jan 1, 2014
  • Armin Rohde

In the present article, the recently observable, very expansive monetary policy, and especially the additional use of so-called non-conventional measures of monetary policy, is discussed in the case of the European Central Bank. The first goal of the research is to analyse to what extent the non-conventional measures of monetary policy are useful instruments to support the monetary policy of the Eurosystem, which has been focusing more strictly on developments of interest rates and interest rate levels than on developments of the quantity of money since 2003/2004, when the ECB changed its monetary policy strategy. Secondly, the non-conventional measures implemented by the ECB are examined against the background of the institutional arrangements of the Eurosystem's recent monetary policy, which is characterized, in short, by free and unlimited allotment of central bank money. If there is no shortage of central bank money within the banking system of the Eurosystem, the question arises why, for example, measures of quantitative easing should be necessary to make monetary policy more efficient. Afterwards, the intentions of the ECB in using non-conventional measures are discussed in detail. This involves a close look at the intended return of currently very low inflation rates to the ECB's aim of maintaining inflation rates below, but close to, 2%. A critical look is also taken at the intended sizeable impact on the balance sheet of the Eurosystem from purchasing bonds or securities and from using targeted longer-term refinancing operations (TLTROs). Finally, the dangers of the implicitly intended depreciation of the euro exchange rate are discussed. All in all, these intentions of the ECB do not seem to be the right way to reach a proper solution of the economic problems currently existing within the Eurozone.

  • Research Article
  • 10.1086/593162
Discussion
  • Jan 1, 2008
  • NBER Macroeconomics Annual

Discussion

  • Research Article
  • 10.15157/tpep.v24i1.12983
Eine neue Rolle für die Europäische Zentralbank? Anmerkungen zu einem spezifisch Deutschen Konflikt. A new role for the European Central Bank? Remarks on a special German conflict
  • Feb 11, 2017
  • Detlev Ehrig

The recent European debt crisis has generated a growing importance of the ECB. The central bank was forced to take measures far beyond its traditional role to stabilize monetary markets and inflation. The European Central Bank has adopted a new function as lender of last resort, providing banks and governments with almost unlimited liquidity. The new and unconventional monetary policy has been hotly debated; in Germany the debate even reached the Constitutional Court. The article gives a survey of the debate, referring to the arguments of the ECB versus the Deutsche Bundesbank. It is indeed questionable whether the ECB has a mandate for its new role in monetary, and incidentally also fiscal, policy. However convincing the arguments may be, new steps towards fiscal arrangements and a deeper political cooperation are needed to stabilize the euro area.

  • Research Article
  • Cite Count Icon 2
  • 10.1086/594132
Comment
  • Jan 1, 2008
  • NBER Macroeconomics Annual
  • Harald Uhlig

Harald Uhlig, University of Chicago

The paper by Boivin, Giannoni, and Mojon seeks to understand the transmission mechanism of monetary policy in the euro area and its constituent countries, document its change since the creation of the euro, and provide a structural interpretation by means of an open‐economy model. To do so, it is building on state‐of‐the‐art modeling techniques, most notably Bernanke, Boivin, and Eliasz’s (2005) factor‐augmented vector autoregressive (FAVAR) approach for the empirical part and Ferrero, Gertler, and Svensson’s (forthcoming) open‐economy dynamic stochastic general equilibrium (DSGE) model for the structural interpretation. The authors combine both with several innovations, well described in the paper, most notably adding a risk premium on intra‐area exchange rates. They report estimated responses to monetary policy largely consistent with conventional wisdom. They document that the creation of the euro has contributed to a widespread reduction in the effect of monetary policy shocks. They interpret this as stemming not only from the adoption of a single currency but also from European Central Bank policy, shifting toward a more aggressive response to inflation and output. Boivin et al.’s paper exemplifies the best of research that is currently done at central banks as well as in a number of academic departments, seeking to understand aggregate fluctuations and the role of monetary policy from both an empirical and a theoretical perspective. These approaches have started to replace the educated guesses with a serious analysis based on state‐of‐the‐art modeling as the starting point for policy debates.
That, in principle, is a good development. Therefore, I hope that the approach taken here is right. But I fear that severe problems remain and that the route taken here is not yet convincing enough for others to follow. Below I shall explain why, including material found subsequently to my presentation in Boston. Much of what is stated here may apply with equal force to the predecessors on which the paper at hand is built, and that may seem like a good defense for the authors. But this is their paper in the end, and it is their choice which methodology to apply. Therefore, it is only fair to raise these points here. I need to warn the reader that this is a discussion. My aim shall be to throw up some challenges and questions and to provoke further thinking on some of these issues. Whether these are fatal flaws or whether all this can be repaired or whether everything is all right after all is something that future research urgently needs to clarify before this approach should be put to wider use. With this disclaimer, let me get in medias res.

I. The FAVAR Model

There are three basic premises of the empirical approach. First, there is considerable comovement in the selected macroeconomic time series so that their most relevant dynamics is captured by a few factors. Second, the strategy here correctly captures the dynamics associated with monetary policy shocks and correctly identifies their effects. Third, the data are sufficiently informative about the changing impact of monetary policy after the introduction of the euro. I am skeptical about all three.

II. Is There Comovement in European Data?

The idea that macroeconomic variables comove has considerable appeal in the United States, but perhaps less so in Europe, with its diverse set of countries. Nonetheless, the $$R^{2}$$’s reported by Boivin et al. in their table 1 seem impressive and convincing. But I was still skeptical.
If indeed a few factors explain most of what is going on, then the sum of the few largest eigenvalues of the variance‐covariance matrix of the data should be near the entire sum of all eigenvalues: that ratio is essentially the $$R^{2}$$’s of all variables on the factors corresponding to these eigenvalues. In fact, one would want more: one would want that sum to be considerably larger than in an artificial data set, generated with the same univariate autocovariance structure as in the data, but no comovement among the artificially generated series. So, I did the following (and I am grateful to the authors for sharing their data set with me to do this). I transformed the data from 1987:Q1 to 2007:Q3 by taking the difference of the log of the current value and its fourth lag and multiplying by 100, except for interest rates, unemployment rates, and capacity utilization: that way, all data are in percents. This appears to be the transformation chosen by the authors. I call this my baseline data set. I calculated eigenvalues in three ways. First, I took the eigenvalues of the variance‐covariance matrix of the baseline data set, summing the largest and calculating the ratio of those partial sums to the total sum. Next, I took the residuals from a regression of the data on current oil and short‐term interest rates, that is, series 1 and 243, and a constant and calculated the eigenvalues from the variance‐covariance matrix of these residuals (and, as an aside, that seemed to me to be a simpler approach than what the authors have done). Finally, I rescaled all time series to have the same standard deviations before calculating the regression and the eigenvalues of the residuals: from discussions with the authors, it may be that this is closest to the route they have chosen. The results can be seen in figure 1, which lists the number of factors (or largest eigenvalues) on the x axis and the fraction of the total sum of eigenvalues on the y axis.
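The partial-sum-of-eigenvalues ratio just described is straightforward to reproduce in outline. A minimal NumPy sketch (my reconstruction, not the discussant's code):

```python
import numpy as np

def factor_variance_shares(X):
    """Cumulative fraction of total variance captured by the k largest
    eigenvalues of the variance-covariance matrix of X (T x N), for
    k = 1..N: the ratio of partial sums of the largest eigenvalues to
    the total sum described in the text."""
    S = np.cov(X, rowvar=False)                  # N x N covariance matrix
    eig = np.sort(np.linalg.eigvalsh(S))[::-1]   # eigenvalues, descending
    return np.cumsum(eig) / eig.sum()

# Toy check: three columns driven by one common factor, so the first
# eigenvalue should carry essentially all of the variance.
rng = np.random.default_rng(0)
f = rng.standard_normal(100)
X = np.column_stack([f, 2 * f, -0.5 * f])
print(factor_variance_shares(X)[0])   # close to 1.0
```

Regressing out oil and short-term rates first, or rescaling the columns to equal standard deviations, as in the discussant's second and third variants, only changes the matrix handed to `np.cov`.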
For the x axis I stopped at 30 factors, although there would be 243 (or 83) in principle. One can see that 11 factors in the nonrescaled version explain about 90%, seven factors get you to about 80% (coinciding roughly with the individual series results in table 1 of the authors), and five factors (think: above and beyond short‐term rates and oil) explain about 75%. This initially looks like good news for the approach taken by the authors.

Fig. 1. Calculated factors and their contribution to overall variance. Three methods of calculating eigenvalues. Authors’ original data. This appears to look good.

Next, I calculated the first‐order autocorrelations of my baseline data set. I then generated an artificial data set as a set of independent AR(1) processes, driven by normally distributed shocks and with the calculated autocorrelations, starting at zero (rather than a draw from the stationary distribution) and rescaled, so that each artificial series has the same standard deviation as the corresponding series in the data. I redid the exact same calculation of the contribution of the factors as above, using the new artificial data series 1 and 243 as regressors: while they have the same autocorrelation as the original data series, there is obviously no reason to expect them to have any explanatory power for the other series. In fact, in the artificial data set, there is no genuine comovement among the series at all. The result for the artificial data set can be seen in figure 2. I would have expected that figure to be quite different from figure 1 and the factors with the largest eigenvalues to explain considerably less than in the original data set. But the figures look surprisingly and uncomfortably alike. When I first saw a first version of this figure, I thought that it had to be due to a programming error, accidentally storing the figure coming from the data. But it is really the figure coming from the random data.
Yes, there are differences. One factor explains as much now, for each of the three methods. It takes a few more factors to get to the same fraction of variance explained. At five factors for the residual, one is at about 60% rather than 75%. Seven factors deliver about 70% for the baseline random data rather than 80% in the original baseline data. And 12 factors are at 85% rather than 90%. For the residuals from the scaled data, the differences are even somewhat bigger. New random draws will generate slightly different pictures anyhow. “Slightly” is important here. The differences from figure 1, while there, remain strikingly small.

Fig. 2. Like fig. 1 but applied to artificial data: independent AR(1)’s, with autoregressive coefficients distributed as in the original data. This figure is not much different from fig. 1, even though there are no “true” factors in the artificial data. Thus perhaps in the original data, too, the true factors may account for much less comovement than fig. 1 or the authors’ calculations would lead one to believe.

The reason is easy to explain but perhaps tricky to formalize. There is considerable autocorrelation in the data. Figure 3 shows the autocorrelation coefficients, calculated by ordinary least squares and sorted by size: many are close to unity. With persistent roots, deviations from the mean will linger for many periods. Thus, the calculated correlation of two series with persistent roots may easily appear to be large in a finite sample, even though there is none asymptotically. The factors extracted from a finite sample interpret these large correlations as comovements, even though there is none. It all works nicely asymptotically; it just does not work in the short sample at hand and with the large autocorrelations that are in the data.
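The placebo experiment described above, independent AR(1)'s matched to each series' persistence and scale, can be sketched as follows. This is a reconstruction under the stated assumptions, not the discussant's actual code:

```python
import numpy as np

def ar1_coef(x):
    """OLS AR(1) coefficient of a demeaned series."""
    x = x - x.mean()
    return float(x[1:] @ x[:-1]) / float(x[:-1] @ x[:-1])

def simulate_placebo(X, rng):
    """Independent AR(1) series with the same AR(1) coefficients and
    standard deviations as the columns of X, started at zero: the same
    persistence as the data, but no true comovement at all."""
    T, N = X.shape
    Z = np.empty((T, N))
    for j in range(N):
        rho = ar1_coef(X[:, j])
        e = rng.standard_normal(T)
        z = np.zeros(T)
        for t in range(1, T):
            z[t] = rho * z[t - 1] + e[t]
        sd = z.std()
        Z[:, j] = z * (X[:, j].std() / sd if sd > 0 else 1.0)
    return Z

def top_k_share(X, k):
    """Share of total variance carried by the k largest eigenvalues."""
    eig = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
    return float(eig[:k].sum() / eig.sum())
```

With highly persistent series and a short sample, `top_k_share` on the placebo data typically comes out not far below its value on the real data, which is exactly the discussant's point: spurious finite-sample correlation between persistent series masquerades as comovement.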
There may be ways around this problem, for example, by prewhitening the series or, at the least, by calculating the factors from the residuals of univariate AR(1) regressions. But this is not what the authors appear to have done.

Fig. 3. Distribution of the AR(1) coefficients in the original data, when fitting univariate AR(1)’s to each series. The artificial data for fig. 2 were created as independent AR(1)’s, with the same distribution of AR(1) coefficients.

So in sum, I fear that the approach taken and the evidence presented by the authors are quite consistent with a world in which there is no comovement among the series at all, and they are probably perfectly consistent with a world in which only very few factors matter at the European scale, but explaining considerably less than what the authors make us believe. And without such comovement, or with too little variation explained by too few factors, the approach has severe problems.

III. Are Monetary Policy Shocks Identified and Identified Correctly?

But let me give Boivin et al. the benefit of the doubt and hope that my arguments or calculations turn out to be somehow incorrect or not appropriate. That is, suppose that the authors did indeed capture the key comovements and 80% of the variance in the data with their seven factors, including interest rates and oil, even if the sample was truly large. Did they correctly identify monetary policy shocks? I have my doubts. For starters, it may be that all the movements due to monetary policy shocks have dropped from the sample, once one concentrates on the movement explained by the factors. Cochrane (1994) and many others have argued that monetary policy shocks explain no more than 20% of the movement in the data. It could be that much or even all of that is in the 20% not explained by the leading factors. It is easy to see how this can happen when extracting factors in an unrestricted manner.
The authors smartly include the key monetary policy instrument in their factors, but even then, it could happen if the majority of the interest rate movements are not due to monetary policy shocks and if other parts of the movement in interest rates get captured by the seven‐factor dynamics and across‐variable correlation. To be more specific, it is worrisome that the fractions explained for M1 and M3 by the factors are among the lowest of all the series (see table 1). We used to think that moving money or moving interest rates is just as good a tool for a central bank to pick a particular point on the demand curve for money. But table 1 would have to be read as if that demand curve is subject to huge and idiosyncratic fluctuations having nothing to do with the rest of the economy. To put it differently, according to these estimates, money has little or nothing to do with monetary policy and the main movements in aggregate activity, but rather has a life of its own. If you believe this, you have an interesting research agenda at hand. But even leaving these arguments aside, I seriously wonder whether the approach to identify monetary policy shocks is reasonable. Section IV.A states that it is assumed that “the latent factors … and the oil price inflation … cannot respond contemporaneously to a surprise interest rate change.” The argument for this approach is in Bernanke et al. (2005), in which the authors argue that the movement in the factors is movement due to “slow‐moving” variables, since any additional systemic movement in the “fast‐moving” variables is one‐dimensional; they interpret this as being largely explained by the surprise in interest rate movements. But there are no such things as slow‐moving variables. After all, all variables have a nonzero one‐step‐ahead prediction error: they thus move fast with respect to something.
The identifying assumption here really is that whatever it is they are reacting to contemporaneously and quickly, it cannot be monetary policy. Why should that be the case? If inflation and employment can suddenly jump a bit because of shifts in market demands, why can they not do so when monetary policy surprisingly changes interest rates? The defense seems to be that the impulse responses look conventional. But they don’t. As figure 1c in Boivin et al.’s paper shows, consumer price index inflation in Germany, France, Italy, and the euro area as a whole tends to move up rather than down after a monetary tightening, and wage inflation moves up in Germany, Italy, and Spain. Additionally, these responses are estimated with a fairly wide error band. The reaction seems to be somewhere between −0.3% and 0.3% in the year following the shock. By contrast, the reaction of GDP is fairly sharp and always down, ranging from −1% or below to about −0.2% in the year following the shock. That seems large compared to the (non)movement in inflation. A more convincing approach to identification is to employ the conventional wisdom and therefore sign restrictions for identification, as I have proposed in Uhlig (2005). With a panel of macroeconomic time series and a factor approach, as in the paper at hand, there are considerably more sign restrictions that can aid in identification, and the methodology then provides for considerably sharper bounds as well as reasonable results (see Ahmadi and Uhlig 2008).

IV. Are the Data Informative about the Change after the Introduction of the Euro?

I do not need to answer that question. Boivin et al. themselves provide ample warning in their paper that this is not so. Note in particular that no error bands have been provided to the post‐euro responses in figures 1a–c or the comparison pictures. Be wary of econometricians who draw conclusions by comparing means without telling you the degree of uncertainty! It is a fair guess that it is large.
There simply was not much time‐series variation in monetary policy since the introduction of the euro. Figure 4 shows what is going on: large and heterogeneous movements in interest rates before the introduction of the euro. Hardly any movements afterward.

Fig. 4. Short‐term interest rates in the EMU, authors’ data: euro area, Germany, France, Italy, Spain, Netherlands, and Belgium.

The authors are probably happy that the impulse responses did not change too dramatically for several key variables. Unfortunately, there are some in which the responses did change, leading us even further away from conventional wisdom. Consumption moves up after a monetary tightening. M3 moves up substantially now after a monetary tightening, quite in contrast to what happened before the euro. One explanation within the philosophy of the authors is that post‐euro monetary policy shocks identified here are really capturing movements in the stock market. For suppose that there are practically no monetary policy shocks and that monetary policy is instead also reacting to movements in some other fast‐moving variable, such as the stock market. Suppose an econometrician knew that and wanted to identify stock market surprise movements above and beyond those of slow‐moving variables. That econometrician would have proceeded exactly as the authors did, except that the impulse responses now would have to be interpreted as responses to stock market shocks rather than monetary policy shocks. How can one tell them apart? Again, sign restrictions might help.

V. The Structural Model

The paper complements the empirical analysis with a structural model that allows one to interpret the data from that vantage point. The key difficulty for this model is to explain the interest rate convergence in figure 4, happening without correspondingly large inflation differences.
The authors readily admit this problem in Section V.C.1, when they write that “the basic version of the model cannot replicate the transmission of monetary policy observed in low‐credibility regimes since long‐term rates are tightly tied to expected future riskless short‐term rates.” One possibility would be to scrap the model at this point. The authors instead invent a clever deus ex machina: shocks to the uncovered interest rate parity (UIP) condition, which furthermore are tied with a key parameter to foreign (or “German”) monetary policy shocks (see Sec. V.B.3). Let me put it differently. Most of the interesting action in monetary policy in Europe over the last 20 years is the convergence process seen in figure 4. The authors sweep all that away by an add‐on to the UIP condition, which, however, has no further implications for aggregate dynamics. Next, they then seek to study how the changes in monetary policy from the pre‐euro regime to the post‐euro regime have affected macroeconomic variables. Shouldn’t one worry a bit that the baby has already been thrown out with the bath water? There is something really interesting happening here: it is the major big thing in the transition to the euro. We cannot quite put it into the theory, so let us ignore it? Shove it into a random shock, leaving everything else unchanged? I can see the desperation of the authors here, and I laud them for their frankness. Figure 4 is hard to explain within this theory. It is my guess too that it has a lot to do with perceptions of risk and updating the probabilities of membership in the European Monetary Union (EMU). So, having gotten so far in setting up this beautiful model and all, I understand that the quick fix of declaring it to be completely uninteresting and tangential was a way to proceed with the rest.
But here is a memo to subsequent research: forget about the rest and instead put this at center stage, to understand the role of changing monetary policy in Europe! The authors instead plug in reaction coefficients of monetary policy, which are obtained neither from the previous empirical exercise nor from estimating the structural model, but instead from another empirical exercise described in Section V.B.2. One has to wonder whether this is consistent with the initial FAVAR approach or with the structural model at hand. In any case, given that they use different coefficients before and after EMU, they find different quantitative results of their model. This is what the main comparison of pre‐euro and post‐euro in the paper rests on. Perhaps a more serious subsample stability test, using the structural DSGE model for estimation rather than an auxiliary model, would have been more in keeping with the empirical approach of the first part of the paper. Note also that the identification assumption that all other variables do not react to monetary policy shocks within the period essentially rests on an artifact of timing: what is the difference between a monetary policy decision happening at the end of one quarter and a monetary policy decision happening at the beginning of the next? It depends only on the artificial way the time line is cut up into periods. One could just as well read the impulse responses by shifting them by one period and letting all variables react to the monetary policy shock within the period. Finally, it would be interesting to compare the monetary policy shocks as identified by the DSGE model to the monetary policy shocks as identified by the FAVAR.

Is this a good model to study the impact of monetary policy in Europe? Much that is central to monetary policy in practice plays essentially no role in the model, and exchange rates moved substantially more than post‐euro inflation differentials.

In sum, the model here is at the current frontier of quantitative research on monetary policy. But I am afraid that several of the most interesting features, which really matter for monetary policy and really matter for the transition to the EMU, have been swept out before the analysis has even begun. The paper is an analysis at the current frontier of research and among the best that one can find on the question at hand, and I laud the authors for what they have accomplished. Nonetheless, I fear that severe problems remain and that the route taken here is not yet convincing enough for others to follow. I fear that the approach taken and the evidence presented by the authors are quite consistent with a world in which there is no comovement among the series at all; they are probably perfectly consistent with a world in which only very few factors matter at the European scale, but explaining considerably less than what the authors make us believe. And without such comovement, or with too little variation explained by too few factors, the approach has severe problems. Even if there are factors, the monetary policy shocks may not be correctly identified: the counterintuitive response of the price level and the sharp reaction of GDP compared to the muted reaction of inflation need to be explained. The deus ex machina of shocks to UIP is invoked in order to explain away what may be the key episode of monetary policy in Europe, the convergence of interest rates (see fig. 4), and other features of central importance to monetary policy play no role in the analysis. Whether these are fatal flaws, whether all this can be repaired, or whether everything is all right after all is something that future research urgently needs to clarify before this approach should be put to wider use.

References
Ahmadi, Pooyan Amir, and Harald Uhlig. 2008. “Measuring the Dynamic Effects of Monetary Policy Shocks: A Bayesian FAVAR Approach with Sign Restriction.” Manuscript, University of Chicago.
Bernanke, Ben S., Jean Boivin, and Piotr Eliasz. 2005. “Measuring the Effects of Monetary Policy: A Factor‐Augmented Vector Autoregressive (FAVAR) Approach.” Quarterly Journal of Economics 120 (1): 387–422.
Cochrane, John H. 1994. “Shocks.” Carnegie‐Rochester Conference Series on Public Policy 41: 295–364.
Ferrero, Andrea, Mark Gertler, and Lars E. O. Svensson. Forthcoming. “Current Account Dynamics and Monetary Policy.” In International Dimensions of Monetary Policy. Chicago: University of Chicago Press.
Uhlig, Harald. 2005. “What Are the Effects of Monetary Policy on Output? Results from an Agnostic Identification Procedure.” Journal of Monetary Economics 52 (2): 381–419.

  • Research Article
  • Cite Count Icon 5
  • 10.14666/2194-7759-9-1-003
Impact of the European Central Bank Monetary Policy on the Financial Indicators of the Eastern European Countries
  • Apr 22, 2020
  • Sergey Yakubovskіy + 2 more

The article presents the results of a study of the influence of European Central Bank monetary policy on the financial indicators of Poland, Hungary, the Czech Republic, and the Slovak Republic. By running vector autoregression models and applying Granger causality tests, the study reveals the impact of the European Central Bank monetary policy on the yield of government bonds, interest rates and the inflow of foreign investments into the CEE countries. The results of the analysis demonstrate that the ECB monetary policy had an overall positive impact on the economies of Poland, Hungary and the Czech Republic. In the context of a general decrease of interest rates under the influence of the ECB's unconventional monetary policy, these countries managed to achieve sustainable economic growth along with a decrease in the ratio of government debt to GDP and the ratio of interest payable on debt to GDP, as well as stock index growth. The opposite situation is observed in the euro area countries with a high debt burden, primarily in Greece and Italy. Although the ECB policy led to a decrease of the ratio of interest payable on debt to GDP of the high-debt euro area countries, the ratio of government debt to GDP for them (except Ireland) has an upward trend. In this situation, the ECB cannot significantly change the goals of its monetary policy, because any, even slight, increase in the discount rate would lead to a new euro area debt crisis with an epicenter in Italy and Greece. The situation may get worse after a probable sharp decline in the US stock market, caused by its current overheating.
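The Granger causality testing mentioned in the abstract boils down to an F-test of whether lags of one series improve the prediction of another beyond its own lags. A minimal NumPy-only sketch on synthetic data (not the study's data; real applications would typically use a packaged routine such as statsmodels' `grangercausalitytests`, which also reports p-values):

```python
import numpy as np

def granger_f(y, x, p=2):
    """F statistic for the null that p lags of x do not help predict y,
    given p lags of y itself (Granger non-causality).  Restricted model:
    y on its own lags; unrestricted model: own lags plus lags of x."""
    T = len(y)
    rows = T - p
    Y = y[p:]
    own = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    cross = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    const = np.ones((rows, 1))
    Xr = np.hstack([const, own])            # restricted regressors
    Xu = np.hstack([const, own, cross])     # unrestricted regressors

    def rss(A):
        beta = np.linalg.lstsq(A, Y, rcond=None)[0]
        resid = Y - A @ beta
        return float(resid @ resid)

    rss_r, rss_u = rss(Xr), rss(Xu)
    df2 = rows - Xu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / df2)

# Synthetic example: x drives y with one lag, so x should Granger-cause y
# but not the other way around.
rng = np.random.default_rng(3)
x = rng.standard_normal(400)
y = np.zeros(400)
for t in range(1, 400):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(granger_f(y, x) > granger_f(x, y))   # True
```

The same restricted-versus-unrestricted comparison, applied equation by equation within a VAR, is what underlies the tests reported in the article.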

  • Dissertation
  • 10.25394/pgs.8846342.v1
Monetary Policy and Heterogeneous Labor Markets
  • Aug 13, 2019
  • Pritha Chaudhuri

Labor market indicators such as unemployment and labor force participation show a significant amount of heterogeneity across demographic groups, which is often not incorporated in monetary policy analysis. This dissertation is composed of three essays that explore the effect of labor market heterogeneity on the design and conduct of monetary policy. The first chapter, Effect of Monetary Policy Shocks on Labor Market Outcomes, studies this question empirically by looking at dynamics of macroeconomic outcomes to a monetary policy shock. I construct a measure of monetary policy shock using narrative methods that represent the unanticipatory changes in policy. Impulse response of unemployment rates for high and low-skill workers show low-skill workers bear a greater burden of contractionary monetary policy shock. Their unemployment rates increase by almost four times that of the high-skill group. Even though we see differences in dynamic response of unemployment rates, the empirical analysis shows some puzzling results where effects of contractionary shock are expansionary in nature. Moreover, these results are plagued by the “recursiveness assumption” that the shock does not affect current output and prices, which is at odds with theoretical models in the New Keynesian literature. In the second chapter, Skill Heterogeneity in an Estimated DSGE Model, I use a structural model to better identify these shocks and study dynamic responses of outcomes to economic shocks. I build a dynamic stochastic general equilibrium model, which captures skill heterogeneity in the U.S. labor market. I use Bayesian estimation techniques with data on unemployment and wages to obtain distribution of key parameters of the model. Low-skilled workers have a higher elasticity of labor supply and labor demand, contributing to the flatness of the wage Phillips curve estimated using aggregate data. 
A contractionary monetary policy shock has immediate effects on output and prices, lowering both output and inflation. Moreover, it increases unemployment rates for both high and low-skill groups, the magnitude being larger for the latter group. The presence of labor market heterogeneity has new implications for the design of monetary policy, which I study in the third chapter, Optimal Monetary Policy with Skill Heterogeneity. I design an optimal policy for the central bank where policymakers respond to the different inflation-unemployment trade-offs between high and low-skill workers. The monetary authority must strike a balance between stabilization of inflation, GDP and the outcomes of high and low-skill workers separately. This optimal policy can be implemented by a simple interest rate rule with unemployment rates for high and low-skill workers, and it is welfare improving.

  • Preprint Article
  • Cited by 1
  • 10.17863/cam.39163
Policy Shocks and Wage Rigidities: Empirical Evidence from Regional Effects of National Shocks
  • Apr 26, 2017
  • Maarten De Ridder + 1 more

This paper studies the effect of wage rigidities on the transmission of fiscal and monetary policy shocks. We calculate downward wage rigidities across U.S. states using the Current Population Survey. These estimates are used to explain differences in the state-level economic effects of identical national shocks in interest rates and taxes. In line with the role of sticky wages in New Keynesian models, we find that contractionary monetary policy and tax shocks increase unemployment and decrease economic activity in rigid states considerably more than in flexible states. We also find larger and more persistent effects of monetary and tax policy shocks for states where the ratio between minimum and median wage is higher and for states that do not have right-to-work legislation.

  • Research Article
  • 10.15157/tpep.v20i2.840
Wandel in der Geldpolitik und Ausweitung der europäischen Schuldenkrise (Change in Monetary Policy and the Expansion of the European Debt Crisis)
  • Jan 1, 2012
  • Armin Rohde

The present paper analyses, for the case of the European Central Bank, a fundamental change in monetary policy strategy, especially in the leading industrialized countries: the shift from concentrating on the development of the quantity of money to focusing strictly on the development of interest rates and interest rate levels. The goal of the research is to show that this change in monetary policy is one important reason for the expansion of the European debt crisis since 2010. Within this new monetary policy regime, interest rate levels on the one hand, and the transmission of interest rate policy measures to the interest rates of other financial markets on the other, play a very important role in today's central banking. For example, to guarantee a sure and strict transmission of interest rate policy measures, which have their starting point in the short-term money market and which should pass through to long-term interest rates in the capital market, the ECB, if necessary, uses outright monetary transactions in secondary markets for euro area sovereign bonds. The main reason for acting in this manner is to prevent long-term interest rates from rising and to guarantee low interest rate levels in sovereign bond markets. The paper argues that this focus on guaranteeing low interest rate levels, especially in order to overcome an economic crisis, is one important reason for the expansion of the European debt crisis: under current European economic policy, interest rates in European bond markets rising above a level of 6 or 7 percent have become an unmistakable sign of a severe debt problem.

  • Preprint Article
  • 10.2866/892636
Yield curve modelling and a conceptual framework for estimating yield curves: Evidence from the European Central Bank's yield curves
  • Jan 1, 2018
  • Per Nymand-Andersen

The European Central Bank (ECB), as part of its forward-looking strategy, needs high-quality financial market statistical indicators as a means to facilitate evidence-based and sound decision-making. Such indicators include timely market intelligence and information to gauge investors' expectations and reaction functions with regard to policy decisions. The main use of yield curve estimations from an ECB monetary policy perspective is to obtain a proper empirical representation of the term structure of interest rates for the euro area which can be interpreted in terms of market expectations of monetary policy, economic activity and inflation expectations over short-, medium- and long-term horizons. Yield curves therefore play a pivotal role in the monitoring of the term structure of interest rates in the euro area. In this context, the purpose of this paper is twofold: firstly, to pave the way for a conceptual framework with recommendations for selecting a high-quality government bond sample for yield curve estimations, where changes mainly reflect changes in the yields-to-maturity rather than in other attributes of the underlying debt securities and models; and secondly, to supplement the comprehensive (mainly theoretical) literature with the more empirical side of term structure estimations by applying statistical tests to select and produce representative yield curves for policymakers and market-makers.
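Parametric term-structure models of the kind discussed here fit a smooth yield function of maturity to observed bond yields. A minimal sketch of the Nelson-Siegel variant (a simpler relative of the Svensson model), fitted by nonlinear least squares to made-up illustrative yields:

```python
import numpy as np
from scipy.optimize import curve_fit

def nelson_siegel(tau, beta0, beta1, beta2, lam):
    """Nelson-Siegel zero-coupon yield for maturity tau (years):
    level (beta0), slope (beta1) and curvature (beta2) components,
    with decay parameter lam."""
    x = tau / lam
    term = (1.0 - np.exp(-x)) / x
    return beta0 + beta1 * term + beta2 * (term - np.exp(-x))

# Illustrative (made-up) maturities in years and yields in percent.
maturities = np.array([0.25, 0.5, 1, 2, 5, 10, 20, 30])
yields = np.array([-0.4, -0.35, -0.3, -0.2, 0.1, 0.6, 1.0, 1.1])

# Bound lam away from zero so the optimizer stays well-behaved.
params, _ = curve_fit(nelson_siegel, maturities, yields,
                      p0=[1.0, -1.0, 1.0, 2.0],
                      bounds=([-10, -10, -10, 0.05], [10, 10, 10, 30]))
print(dict(zip(["beta0", "beta1", "beta2", "lambda"], params.round(3))))
```

In the limits, the fitted yield tends to beta0 at long maturities and to beta0 + beta1 at the short end, which is why the parameters are read as level and slope factors.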

  • Dissertation
  • 10.4225/03/589bc85e10d5d
Real exchange rate movements in developed and developing economies
  • Feb 9, 2017
  • Taya Dumrongrittikul

The aim of this thesis is to combine economic theory and empirical analysis in an effort to understand the dynamic effects of real exchange rate determinants, policies and global factors on real exchange rates. This thesis comprises three related essays. The first essay examines the validity of the Balassa-Samuelson hypothesis (BSH). This study introduces a new approach for classifying traded and non-traded industries which allows for country-specific heterogeneity and trade endogeneity, and then uses this classification in the construction of a model that allows for the Balassa-Samuelson effect. We find that in developed countries, productivity growth in traded sectors leads to a real depreciation, inconsistent with the BSH; however, higher economic growth is followed by a real appreciation. The results for developing countries support the BSH, although persistence profiles show slow speeds of convergence. The second essay extends the analysis into a general model of real exchange rates. It investigates the impact of trade liberalisation, productivity growth, monetary policy and government consumption on real exchange rates in four panels of countries consisting of European, non-European developed, Asian developing and non-Asian developing countries. The analysis is based on a panel structural vector error correction model augmented with foreign variables, and a Bayesian approach is used to implement sign restrictions with a penalty function for undertaking impulse response analysis. We find that trade liberalisation generates depreciation and higher government consumption causes persistent appreciation. A contractionary monetary policy shock has only a short-run impact on real exchange rates, consistent with the long-run neutrality of monetary policy. 
Traded-sector productivity gains cause an impact appreciation in Asian developing countries and lead to persistent appreciation in non-Asian developing countries, whereas the shocks induce long-run depreciation in developed countries, in line with the results in the first essay. The third essay combines the four panels of countries into a Global Vector Autoregressive (GVAR) model to examine how real exchange rates and key macroeconomic variables respond to an oil price shock, a US monetary policy shock and simultaneous shocks to productivity in four large Asian emerging economies. Using a sign-restricted impulse response approach, we find that an oil price shock causes a depreciation of the US dollar as well as a recession and excessive inflation in the global economy. The way in which monetary policy deals with the shock matters for the long-run level of economic activity. An unexpected US monetary tightening causes an appreciation of the US dollar and a fall in real GDP and inflation over the long run. The monetary policy reaction to this change seems to be stronger in developing countries than in developed countries. Simultaneous shocks to traded-sector productivity in China, India, Korea and Indonesia induce a rise in real GDP and currency appreciation in these four countries. Meanwhile, many Asian countries benefit from the shocks through higher productivity and GDP, and their currencies are likely to appreciate.
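Sign-restricted identification of the kind used in these essays is commonly implemented by rotating the Cholesky factor of the reduced-form covariance with random orthogonal matrices and keeping the rotations whose impact responses match the imposed signs (in the spirit of Uhlig and Rubio-Ramírez et al.). A minimal sketch with an illustrative covariance matrix and hypothetical restrictions for a contractionary monetary policy shock:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative reduced-form covariance of (output, inflation, interest rate)
# residuals; in practice this comes from an estimated (B)VAR.
sigma = np.array([[1.0, 0.3, 0.2],
                  [0.3, 1.0, 0.4],
                  [0.2, 0.4, 1.0]])

# Hypothetical sign restrictions on impact: output down, inflation down,
# interest rate up after a contractionary monetary policy shock.
signs = np.array([-1, -1, +1])

P = np.linalg.cholesky(sigma)
accepted = []
for _ in range(5000):
    # Random orthogonal rotation: QR of a Gaussian matrix (Haar measure).
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q @ np.diag(np.sign(np.diag(r)))  # normalize for uniqueness
    impact = P @ q                        # candidate structural impact matrix
    col = impact[:, 2]                    # candidate monetary policy column
    if np.all(np.sign(col) == signs):
        accepted.append(col)
    elif np.all(np.sign(-col) == signs):  # a sign flip is also admissible
        accepted.append(-col)

print(f"{len(accepted)} of 5000 rotations satisfy the sign restrictions")
```

Because `impact @ impact.T` equals `sigma` for every orthogonal rotation, each accepted column is a valid impulse vector; the spread across accepted draws is the set identification that sign restrictions deliver.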

  • Dissertation
  • 10.11588/heidok.00011560
Decision Rules, Transparency and Central Banks
  • Jan 1, 2011
  • Bernhard Johannes Köster

The trade-off between price stability and output stabilization is at the centre of monetary policy-making. This trade-off enters many macroeconomic models as the central bank is assumed to minimize some loss function consisting of inflation deviations and output deviations from some specific targets. The policy instrument to control these variables is the short-term interest rate. Monetary policy-making is usually conducted in committees, whose members may have conflicting interests. This is evident for the Governing Council of the European Central Bank or the Board of Governors of the Federal Reserve System in the United States. In this thesis we take a closer look at monetary policy committees. In particular, we address how decision rules, and transparency requirements concerning such rules, should be designed in monetary policy committees. We concern ourselves with the following two issues: 1. Which type of majority rule should be applied in the monetary policy committee? 2. Should the public know which decision rule the monetary policy committee applies, and should the central bankers release their information about economic shocks? To address these questions, standard monetary models with aggregate demand and supply shocks are introduced, and we assume that a committee decides on the interest-rate change according to some voting rule. We develop a flexible majority rule, in which the majority required for an interest-rate change depends on the size of the change itself. Our main findings are: first, a well-designed flexible majority rule can improve welfare compared to a fixed majority rule in a simple shock structure. This insight is robust if we apply more complex shock structures or introduce a simple dynamic setup. 
Second, transparency regarding the rule has ambiguous effects on welfare, and it may not be necessary to publish the decision rule; within our framework, however, we can provide a best combination of a decision rule and an information setup.
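A flexible majority rule of this kind can be sketched as a vote threshold that rises with the size of the proposed interest-rate change. The linear functional form, the committee size and all parameter values below are illustrative assumptions, not the rule derived in the thesis:

```python
import math

def required_majority(n_members, delta_bp, base=0.5, slope=0.005):
    """Votes needed to approve an interest-rate change of |delta_bp|
    basis points under a hypothetical flexible majority rule: a simple
    majority suffices for small moves, and the required vote share
    grows linearly with the size of the proposed change (capped at
    unanimity)."""
    share = min(base + slope * abs(delta_bp), 1.0)
    return min(math.floor(n_members * share) + 1, n_members)

# Illustrative 25-member committee: larger moves need broader support.
print(required_majority(25, 0))    # 13 votes: simple majority
print(required_majority(25, 25))   # 16 votes
print(required_majority(25, 100))  # 25 votes: unanimity
```

Under such a rule, members with extreme preferred rates cannot push through large changes without broad agreement, which is the mechanism behind the welfare gain relative to a fixed majority rule.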
