Abstract

This paper studies the strong consistency of the ordinary least squares (OLS) estimator in linear regression models with adaptive learning. It is a companion to Christopeit & Massmann (2018), which considers the estimator’s convergence in distribution and its weak consistency in the same setting. Under constant gain learning, the model is closely related to stationary, (alternating) unit root or explosive autoregressive processes. Under decreasing gain learning, the regressors in the model are asymptotically collinear. The paper examines, first, the issue of strong convergence of the learning recursion: it is argued that, under constant gain learning, the recursion does not converge in any probabilistic sense, while for decreasing gain learning rates are derived at which the recursion converges almost surely to the rational expectations equilibrium. Secondly, the paper establishes the strong consistency of the OLS estimators, under both constant and decreasing gain learning, as well as the rates at which the estimators converge almost surely. In the constant gain model, separate estimators for the intercept and slope parameters are contrasted with the joint estimator, drawing on the recent literature on explosive autoregressive models. Thirdly, it is emphasised that strong consistency obtains in all models even though the near-optimal condition for the strong consistency of OLS in linear regression models with stochastic regressors, established by Lai & Wei (1982a), is not always met.

Highlights

  • This paper looks at the strong consistency of the ordinary least squares (OLS) estimator in a stereotypical macroeconomic model with adaptive learning

  • The paper examines, first, the issue of strong convergence of the learning recursion: It is argued that, under constant gain learning, the recursion does not converge in any probabilistic sense, while for decreasing gain learning rates are derived at which the recursion converges almost surely to the rational expectations equilibrium

  • The dynamics of a_t can be written as a_t = (1 − c) a_{t−1} + γ (δ + ε_t), where c = (1 − β) γ as in (8). It is well-known in the literature that constant gain recursions do not in general converge to the rational expectations equilibrium (REE)
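The constant-gain recursion in the last highlight is easy to simulate. The sketch below (parameter values for β, δ, γ and the Gaussian noise are illustrative assumptions, not taken from the paper) shows a path that keeps fluctuating around the REE a* = δ/(1 − β) rather than settling on it:

```python
import numpy as np

# Illustrative parameter values (assumptions, not the paper's calibration)
beta, delta, gamma = 0.5, 1.0, 0.1
c = (1.0 - beta) * gamma          # c = (1 - beta) * gamma, as in (8)
ree = delta / (1.0 - beta)        # REE fixed point a* = delta / (1 - beta)

rng = np.random.default_rng(0)
T = 10_000
eps = rng.standard_normal(T)
a = np.empty(T)
a[0] = 0.0
for t in range(1, T):
    # constant-gain recursion: a_t = (1 - c) a_{t-1} + gamma * (delta + eps_t)
    a[t] = (1.0 - c) * a[t - 1] + gamma * (delta + eps[t])

# The path stays centred near the REE but its fluctuations do not die out,
# consistent with the recursion not converging in any probabilistic sense.
print(ree, a[-1000:].mean(), a[-1000:].std())
```

Because the gain γ is held fixed, the noise term γ ε_t keeps injecting variance of constant order, which is the intuition behind the non-convergence claim.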


Summary

Introduction

This paper studies the strong consistency of the ordinary least squares (OLS) estimator in a stereotypical macroeconomic model with adaptive learning. Autoregressive models with intercept have frequently been treated within the framework of identification and control of dynamic systems, namely as input-output systems with a single constant input, cf. Lai & Wei (1982a, Section 3). That framework could be used to establish a.s. convergence rates for the norm of the joint, i.e. bivariate, OLS estimator of θ = (β, δ). We establish the strong consistency of the OLS estimators of δ and β in the constant and decreasing gain learning models, as well as the rates at which they converge almost surely. This is a novel undertaking and will be of use when interest lies in the long-run behaviour of trajectories. All convergence statements are of the almost sure (a.s.) type unless otherwise indicated.
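To make the decreasing-gain setting concrete, the following sketch simulates a simple self-referential model and computes the joint OLS estimate of θ = (β, δ). The law of motion y_t = δ + β a_{t−1} + ε_t, the gain sequence γ_t = 1/t, and all parameter values are assumptions chosen for illustration, not the paper's specification:

```python
import numpy as np

# Illustrative parameters (assumptions): actual law of motion
# y_t = delta + beta * a_{t-1} + eps_t, forecast a_t updated by
# decreasing-gain learning with gain gamma_t = 1/t.
beta, delta = 0.3, 1.0
ree = delta / (1.0 - beta)        # REE fixed point
rng = np.random.default_rng(1)
T = 50_000
eps = rng.standard_normal(T)
y = np.zeros(T)
a = np.zeros(T)
for t in range(1, T):
    y[t] = delta + beta * a[t - 1] + eps[t]
    a[t] = a[t - 1] + (y[t] - a[t - 1]) / t   # decreasing gain 1/t

# a_t converges a.s. to the REE, so the regressors (1, a_{t-1}) become
# asymptotically collinear; OLS regresses y_t on (1, a_{t-1}).
X = np.column_stack([np.ones(T - 1), a[:-1]])
theta_hat, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
print(ree, a[-1], theta_hat)      # joint estimate of (delta, beta)
```

The asymptotic collinearity is visible in the data: the column a[:-1] degenerates towards the constant a* = δ/(1 − β), so the well-identified direction is the fitted value at a*, while the individual coefficients converge only slowly, in line with the slower a.s. rates derived in the paper.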

Constant gain
Decreasing gain
Joint estimation of the parameters
Comparison
Conclusion
A Proof of Theorem 1
Prerequisites
Path properties
Eigenvalues of the moment matrix
Generalities
Estimation of the slope
Estimation of the intercept
Stable case revisited
Proof of Theorem 3
Stable case
Unit root case
Asymptotics of A_{0T}
Consistency