Abstract

Martingale estimating functions provide a convenient framework and play an important role in inference for nonlinear time series models. However, when information about the first four conditional moments of the observed process is available, quadratic estimating functions are more informative. In this paper, a general framework for joint estimation of conditional mean and variance parameters in time series models using quadratic estimating functions is developed. The superiority of the approach is demonstrated by comparing the information associated with the optimal quadratic estimating function with the information associated with other estimating functions. The method is used to study the optimal quadratic estimating functions of the parameters of autoregressive conditional duration (ACD) models, random coefficient autoregressive (RCA) models, doubly stochastic models, and regression models with ARCH errors. Closed-form expressions for the information gain are also discussed in some detail.

Highlights

  • Godambe [1] was the first to study inference for discrete-time stochastic processes using the estimating function method

  • We study the linear and quadratic martingale estimating functions and show that the quadratic estimating functions are more informative when the conditional mean and variance of the observed process depend on the same parameter of interest

  • We show that the optimal quadratic estimating function is more informative than the estimating function used in Thavaneswaran and Abraham [2]


Introduction

Godambe [1] was the first to study inference for discrete-time stochastic processes using the estimating function method. We study linear and quadratic martingale estimating functions and show that the quadratic estimating functions are more informative when the conditional mean and variance of the observed process depend on the same parameter of interest. In the class M of all zero-mean, square-integrable martingale estimating functions, the optimal estimating function g*_n(θ) is the one that maximizes, in the partial order of nonnegative definite matrices, the information matrix. The maximum correlation between the optimal estimating function and the true (unknown) score justifies the terminology "quasi-score" for g*_n(θ). It follows from Lindsay [8, page 916] that if an estimator is obtained by solving an unbiased estimating equation g_n(θ) = 0, the asymptotic variance of the resulting estimator is the inverse of the information I_{g_n}; hence the estimator obtained from a more informative estimating equation is asymptotically more efficient.
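As a minimal illustration of solving an unbiased estimating equation (a toy AR(1) example chosen for simplicity, not one of the models treated in the paper): for y_t = θ y_{t-1} + ε_t with martingale differences m_t = y_t − θ y_{t-1}, the linear martingale estimating function g_n(θ) = Σ_t (y_t − θ y_{t-1}) y_{t-1} has mean zero at the true θ, and g_n(θ) = 0 has a closed-form root.

```python
import random

def simulate_ar1(theta, n, seed=0):
    """Simulate an AR(1) process y_t = theta * y_{t-1} + eps_t, eps_t ~ N(0, 1)."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n):
        y.append(theta * y[-1] + rng.gauss(0.0, 1.0))
    return y

def linear_ef_estimate(y):
    """Root of the linear martingale estimating equation
    g_n(theta) = sum_t (y_t - theta * y_{t-1}) * y_{t-1} = 0,
    which solves to theta_hat = sum y_t y_{t-1} / sum y_{t-1}^2."""
    num = sum(y[t] * y[t - 1] for t in range(1, len(y)))
    den = sum(y[t - 1] ** 2 for t in range(1, len(y)))
    return num / den

y = simulate_ar1(theta=0.6, n=5000)
theta_hat = linear_ef_estimate(y)
```

With n = 5000 the estimator lands close to the true value 0.6; by the Lindsay result quoted above, its asymptotic variance is the inverse of the information I_{g_n}, which is what a quadratic estimating function can improve upon when the conditional variance also carries information about θ.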

General Model and Method
Random Coefficient Autoregressive Models
Doubly Stochastic Time Series Model
Conclusions