Formulas are derived for computing asymptotic covariance matrices of sets of impulse responses, step responses, or variance decompositions of estimated dynamic simultaneous-equations models in vector autoregressive moving-average (VARMA) form. Computed covariances would be used to test linear restrictions on sets of impulse responses, step responses, or variance decompositions. The results unify and extend previous formulas to handle any model in VARMA form, provide accurate computations based on analytic derivatives, and provide insights into the structures of the asymptotic covariances.

FOLLOWING THE LEAD OF SIMS (1980), impulse responses, step responses, and variance decompositions have become standard tools for examining the empirical validity of economic theories or interpreting the forecasting and policy implications of estimated linear dynamic models. Runkle (1987) demonstrated the importance of accounting for sampling variability in these exercises. He computed confidence bounds for impulse responses and variance decompositions derived from estimated vector autoregressive (VAR) models using both stochastic simulations and asymptotic normal approximations. The two methods produced comparable results, but the simulations took about 160 times longer to compute than the normal approximations based on numerical derivatives.

Whereas Runkle reported confidence bounds only for individual impulse responses and decomposed variances of VAR models, the present paper derives equations for computing full asymptotic covariance matrices of sets of impulse responses, step responses, or variance decompositions of estimated dynamic simultaneous-equations models in autoregressive moving-average form. The computed covariances may be used to jointly test multiple linear restrictions, or linear approximations of differentiable nonlinear restrictions, on sets of impulse responses, step responses, or variance decompositions of estimated models. Because the derived computational equations are based on analytical derivatives, they have at least two advantages: they lead to more accurate and often faster computations than those based on numerical derivatives, and they provide insight into the structures of the asymptotic covariances of the estimated quantities. By allowing for various identifying restrictions, the paper extends and unifies results of Schmidt (1973), Evans and Wells (1986), and Lütkepohl (1988, 1989,
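To make concrete the kind of delta-method calculation that such covariance formulas feed into, the following is a minimal sketch for the special case of an unrestricted VAR(1) with impulse responses Psi_h = A^h, using the standard analytic Jacobian of vec(A^h) with respect to vec(A) and a Wald test of linear restrictions on the responses. The function names (`irf_jacobian_var1`, `irf_cov_var1`, `wald_test`) and the numerical values are illustrative assumptions, not the paper's general VARMA formulas.

```python
import numpy as np
from scipy.stats import chi2

def irf_jacobian_var1(A, h):
    """Analytic Jacobian of vec(A**h) w.r.t. vec(A) for a VAR(1):
    G_h = sum_{m=0}^{h-1} (A')^(h-1-m) kron A^m."""
    n = A.shape[0]
    G = np.zeros((n * n, n * n))
    for m in range(h):
        G += np.kron(np.linalg.matrix_power(A.T, h - 1 - m),
                     np.linalg.matrix_power(A, m))
    return G

def irf_cov_var1(A, cov_vecA, h):
    """Delta-method asymptotic covariance of vec(Psi_h) = vec(A**h),
    given the estimated covariance of vec(A)."""
    G = irf_jacobian_var1(A, h)
    return G @ cov_vecA @ G.T

def wald_test(psi_hat, cov_psi, R, r):
    """Wald test of the linear restrictions R psi = r on stacked responses."""
    d = R @ psi_hat - r
    W = float(d.T @ np.linalg.solve(R @ cov_psi @ R.T, d))
    return W, chi2.sf(W, df=R.shape[0])

# Illustrative usage with made-up estimates (not from the paper):
A_hat = np.array([[0.5, 0.1],
                  [0.0, 0.4]])
cov_vecA = 0.01 * np.eye(4)      # assumed Cov(vec(A_hat)), already scaled by 1/T
h = 4
psi_hat = np.linalg.matrix_power(A_hat, h).reshape(-1, order="F")  # vec(A^h)
cov_psi = irf_cov_var1(A_hat, cov_vecA, h)
R = np.eye(4)[:1]                # test that one response at horizon h equals zero
W, p_value = wald_test(psi_hat, cov_psi, R, np.zeros(1))
```

The analytic Jacobian replaces the numerical differentiation used in Runkle's normal approximations; in the paper the corresponding derivatives are worked out for general VARMA-form simultaneous-equations models with identifying restrictions, and for step responses and variance decompositions as well as impulse responses.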