Abstract

In practice we often encounter situations where the sampled observations are independent and share common parameters in their distributions but are not identically distributed. While maximum likelihood methods provide canonical approaches for statistical inference in such contexts, they carry the usual baggage of lack of robustness to small deviations from the assumed conditions. In the present paper we develop a general estimation method for handling such situations based on a minimum distance approach which exploits the robustness properties of the density power divergence measure (Basu et al., 1998 [2]). We establish the asymptotic properties of the proposed estimators and illustrate the benefits of our method in the case of linear regression.

Highlights

  • In the standard and basic problem of statistical inference, the experimenter observes a suitably chosen random sample from a distribution of interest, appropriately modeled by a parametric family, and has to estimate the unknown parameters and/or perform tests of hypothesis about them

  • More complex cases are quite frequent in real life: often the experimenter is faced with observations that are independent but do not share the same distribution

  • Since the density power divergence is a genuine divergence, in the sense that it is nonnegative and attains its minimum if and only if its two arguments are identical, the functional Tα(G1, . . . , Gn) is Fisher consistent under the assumption of identifiability of the model
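For reference, the density power divergence between a true density g and a model density f, as introduced in Basu et al. (1998) [2], is indexed by a tuning parameter α > 0 and can be written as:

```latex
d_\alpha(g, f) = \int \left\{ f^{1+\alpha}(y)
  - \left(1 + \frac{1}{\alpha}\right) f^{\alpha}(y)\, g(y)
  + \frac{1}{\alpha}\, g^{1+\alpha}(y) \right\} dy, \qquad \alpha > 0.
```

As α → 0 this family approaches the Kullback-Leibler divergence (recovering maximum likelihood), while α = 1 gives the squared L2 distance ∫(f − g)²; larger α trades efficiency for robustness.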



Introduction

The standard and basic problem of statistical inference provides the experimenter with a suitably chosen random sample from a distribution of interest which is appropriately modeled by a parametric family, and the experimenter has to estimate the unknown parameters and/or perform tests of hypothesis about them.

2. The minimum density power divergence (DPD) estimator for independent non-homogeneous observations

Basu et al. (1998) [2] introduced the density power divergence family as a measure of discrepancy between two probability density functions and used this family to robustly estimate the model parameter under the usual setup of independent and identically distributed data. We generalize this concept of robust minimum density power divergence estimation to the case of independent but not identically distributed observations. Differentiating the objective function with respect to θ, we get the estimating equation of the minimum density power divergence estimator for non-homogeneous observations as

  (1/n) ∑ᵢ₌₁ⁿ [ fᵢ(Xᵢ; θ)^α uᵢ(Xᵢ; θ) − ∫ fᵢ(y; θ)^{1+α} uᵢ(y; θ) dy ] = 0,

where fᵢ is the model density of the i-th observation and uᵢ(y; θ) = ∂ log fᵢ(y; θ)/∂θ is the corresponding score function. Since the density power divergence is a genuine divergence, in the sense that it is nonnegative and attains its minimum if and only if its two arguments are identical, the functional Tα(G1, . . . , Gn) is Fisher consistent under the assumption of identifiability of the model.
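To make the idea concrete, here is a minimal sketch of minimum DPD estimation in a simple non-homogeneous setup: regression through the origin with unit-variance normal errors, so each Yᵢ ~ N(θxᵢ, 1) has its own mean. The function names, the fixed σ = 1, and the crude golden-section search are illustrative assumptions, not the paper's implementation; for a unit-variance normal density the integral term of the divergence has a known closed form, which the code uses.

```python
import math

def dpd_objective(theta, x, y, alpha):
    """Average DPD objective for independent Y_i ~ N(theta * x_i, 1).
    The observations are independent but not identically distributed
    (each has its own mean theta * x_i).  For a unit-variance normal
    density the integral of f^(1+alpha) is (2*pi)^(-alpha/2) / sqrt(1+alpha)."""
    const = (2 * math.pi) ** (-alpha / 2) / math.sqrt(1 + alpha)
    total = 0.0
    for xi, yi in zip(x, y):
        # model density of the i-th observation evaluated at the data point
        f = math.exp(-0.5 * (yi - theta * xi) ** 2) / math.sqrt(2 * math.pi)
        total += const - (1 + 1 / alpha) * f ** alpha
    return total / len(y)

def mdpde(x, y, alpha, lo=-10.0, hi=10.0, iters=100):
    """Minimize the DPD objective over theta by golden-section search
    (a crude stand-in for a proper optimizer; assumes the objective is
    well behaved on [lo, hi])."""
    phi = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    for _ in range(iters):
        c, d = b - phi * (b - a), a + phi * (b - a)
        if dpd_objective(c, x, y, alpha) < dpd_objective(d, x, y, alpha):
            b = d
        else:
            a = c
    return (a + b) / 2

# Clean data y = 2x with one gross outlier at the last point.
x = list(range(1, 11))
y = [2.0 * xi for xi in x]
y[-1] = 50.0

theta_dpd = mdpde(x, y, alpha=0.5)  # robust fit, stays near the true slope 2
theta_ls = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
```

With α = 0.5 the outlier's density value is essentially zero, so it contributes almost nothing to the estimating equation and the fit stays near the true slope 2, while the least-squares fit (the α → 0 limit of this family) is dragged toward the outlier, to roughly 2.78. This downweighting of observations with small model density is exactly the robustness mechanism the paper exploits.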

Asymptotic properties
Influence function analysis
Breakdown point of the location parameter in a location-scale type model
Application
Asymptotic efficiency
Equivariance of the regression coefficient estimators
Influence function and sensitivities
Breakdown point of the estimator of regression coefficient
Comparison with other methods
Real data examples
Hertzsprung-Russell data of the star cluster
Belgium telephone call data
Salinity data
Concluding remarks