Abstract

Introduction

The class of models discussed in Parts ONE and TWO of the book assumes that the specification of the likelihood function, in terms of the joint probability distribution of the variables, is correct and that the regularity conditions set out in Chapter 2 are satisfied. Under these conditions, the maximum likelihood estimator has the desirable properties discussed in Chapter 2: it is consistent, asymptotically normally distributed and asymptotically efficient, since in the limit it achieves the Cramér-Rao lower bound given by the inverse of the information matrix.

This chapter addresses the problem investigated by White (1982), namely maximum likelihood estimation when the likelihood function is misspecified. In general, the maximum likelihood estimator does not display the usual properties in the presence of misspecification. There are, however, a number of important special cases in which the maximum likelihood estimator of a misspecified model still provides a consistent estimator of some of the population parameters of the true model. Because it is based on a misspecified model, this estimator is referred to as the quasi-maximum likelihood estimator. Perhaps the most important case is the estimation of the conditional mean in the linear regression model, discussed in detail in Part TWO, where potential misspecifications arise from assuming normality, constant variance or independence. One important difference between the maximum likelihood estimator based on the true probability distribution and the quasi-maximum likelihood estimator is that the usual estimator of the variance derived in Chapter 2, which relies on the information matrix equality holding, is no longer appropriate for the quasi-maximum likelihood estimator.
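The last two points can be made concrete with a small numerical sketch. The example below is hypothetical (the data-generating process and all parameter values are assumptions, not taken from the chapter): a linear regression is estimated under a Gaussian likelihood with constant variance, while the true errors are heteroskedastic. The Gaussian quasi-maximum likelihood estimator of the conditional-mean parameters (which coincides with ordinary least squares) remains consistent, but the conventional variance estimator, which relies on the information matrix equality, differs from the White (1982) sandwich variance that is valid under misspecification.

```python
import numpy as np

# Hypothetical data-generating process: linear conditional mean, but the
# error standard deviation depends on x, so the Gaussian likelihood with
# constant variance is misspecified.
rng = np.random.default_rng(0)
n = 5000
x = rng.uniform(1.0, 3.0, n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([1.0, 2.0])
u = rng.normal(0.0, x)                    # sd of u_i is x_i, not constant
y = X @ beta_true + u

# Gaussian QML estimator of the conditional-mean parameters = OLS.
XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y              # consistent despite misspecification
resid = y - X @ beta_hat

# Conventional variance, valid only if the information matrix equality holds.
sigma2 = resid @ resid / n
var_conventional = sigma2 * XtX_inv

# White sandwich variance: (X'X)^{-1} (sum_i u_i^2 x_i x_i') (X'X)^{-1},
# appropriate for the quasi-maximum likelihood estimator.
meat = (X * resid[:, None] ** 2).T @ X
var_sandwich = XtX_inv @ meat @ XtX_inv

se_conventional = np.sqrt(np.diag(var_conventional))
se_sandwich = np.sqrt(np.diag(var_sandwich))
```

With a large sample, `beta_hat` lies close to the true coefficients, while `se_conventional` and `se_sandwich` disagree, illustrating why the usual variance estimator must be replaced by the sandwich form under misspecification.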
