Abstract

This paper presents a model selection criterion in a composite likelihood framework, based on density power divergence measures and on the composite minimum density power divergence estimators, which depends on a tuning parameter. After introducing this criterion, some of its asymptotic properties are established. We present a simulation study and two numerical examples in order to point out the robustness properties of the introduced model selection criterion.

Highlights

  • Composite likelihood inference is an important approach for dealing with real situations of large data sets or very complex models, in which classical likelihood methods are computationally difficult or even impossible to manage.

  • We have addressed the problem of model selection in the framework of composite likelihood methodology, on the basis of the density power divergence (DPD) as a measure of the closeness between the composite density and the true model that drives the data.

  • Through a simulation study, we have shown that the model selection criterion proposed here works well in practice and, mainly, that the use of the composite minimum density power divergence estimator (CMDPDE) makes the criterion more robust than criteria based on the classical composite maximum likelihood estimator (CMLE); a simple univariate illustration of the minimum DPD idea is sketched right after this list.

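For intuition only, the following is a minimal sketch of the minimum density power divergence idea in its simplest univariate form, not the paper's CMDPDE (which operates on composite densities). It fits a normal model to artificially contaminated data by minimizing the empirical DPD objective and compares the result with the maximum likelihood fit. The normal model, the contamination scheme, the value of the tuning parameter alpha, and the optimizer settings are all assumptions made for this illustration.

```python
# Illustrative sketch (not the paper's CMDPDE): minimum density power
# divergence estimation for a univariate normal model, compared with the
# MLE on contaminated data.  alpha is the tuning parameter of the DPD.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0.0, 1.0, 95),      # bulk of the data
                    rng.normal(10.0, 1.0, 5)])     # a few outliers

def dpd_objective(params, y, alpha):
    """Empirical DPD objective for N(mu, sigma^2); the term in the true
    density g alone is dropped because it does not depend on theta."""
    mu, log_sigma = params
    sigma = np.exp(log_sigma)
    # closed form of the integral of f_theta^(1+alpha) for the normal model
    int_term = (2 * np.pi * sigma**2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    data_term = (1 + 1 / alpha) * np.mean(norm.pdf(y, mu, sigma) ** alpha)
    return int_term - data_term

mle = (y.mean(), y.std())                              # non-robust fit
fit = minimize(dpd_objective, x0=[np.median(y), 0.0],
               args=(y, 0.5), method="Nelder-Mead")    # alpha = 0.5
mdpde = (fit.x[0], np.exp(fit.x[1]))
print("MLE   (mu, sigma):", mle)
print("MDPDE (mu, sigma):", mdpde)   # stays close to (0, 1) despite outliers
```

With alpha = 0.5 the DPD fit is essentially unaffected by the outliers, whereas the MLE mean and standard deviation are pulled towards them; as alpha tends to zero the DPD objective approaches the negative log-likelihood and the robustness is lost.
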

Introduction

Composite likelihood inference is an important approach for dealing with real situations of large data sets or very complex models, in which classical likelihood methods are computationally difficult or even impossible to manage. Let $\{F_\theta,\ \theta \in \Theta\}$ be a parametric identifiable family of distributions for an observation $y = (y_1, \ldots, y_m)^T$, a realization of a random $m$-vector $Y$. In this setting, the composite likelihood function based on $K$ different marginal or conditional distributions has the form

$$CL(\theta, y) = \prod_{k=1}^{K} f_{A_k}(y_j, j \in A_k; \theta)^{w_k}, \qquad (1)$$

with associated composite log-likelihood

$$c\ell(\theta, y) = \sum_{k=1}^{K} w_k\, \ell_{A_k}(\theta, y), \qquad \ell_{A_k}(\theta, y) = \log f_{A_k}(y_j, j \in A_k; \theta),$$

where $\{A_k\}_{k=1}^{K}$ is a family of sets of indices associated either with marginal or conditional distributions involving some $y_j$, $j \in \{1, \ldots, m\}$, and $w_k$, $k = 1, \ldots, K$, are non-negative and known weights. If the weights are all equal, they can be ignored; in this case, all the statistical procedures give equivalent results. The composite maximum likelihood estimator (CMLE), $\hat{\theta}_c$, is obtained by maximizing, with respect to $\theta \in \Theta$, the expression (1).
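As a concrete illustration of expression (1), the sketch below builds a pairwise composite log-likelihood for an equicorrelated multivariate normal model and maximizes it numerically to obtain the CMLE. The specific model, the choice of bivariate marginals as the sets $A_k$, the use of equal (hence omitted) weights, and the optimizer settings are assumptions made for this example only.

```python
# Minimal sketch (under simplifying assumptions, not the paper's setup):
# pairwise composite log-likelihood for an equicorrelated m-variate normal
# model, and the CMLE obtained by maximizing it numerically.
# All weights w_k are taken equal and therefore ignored.
import itertools
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

def pairwise_cloglik(params, Y):
    """Composite log-likelihood: sum over all pairs (j, k) of the bivariate
    marginal log-densities log f_{A_k}(y_j, y_k; theta)."""
    mu, log_sigma, z = params
    sigma, rho = np.exp(log_sigma), np.tanh(z)       # keep sigma>0, |rho|<1
    cov = sigma**2 * np.array([[1.0, rho], [rho, 1.0]])
    total = 0.0
    for j, k in itertools.combinations(range(Y.shape[1]), 2):
        total += multivariate_normal.logpdf(Y[:, [j, k]],
                                            mean=[mu, mu], cov=cov).sum()
    return total

# simulate data from the assumed model and compute the CMLE
rng = np.random.default_rng(1)
m, rho_true = 5, 0.4
cov_true = (1 - rho_true) * np.eye(m) + rho_true * np.ones((m, m))
Y = rng.multivariate_normal(np.zeros(m), cov_true, size=200)

fit = minimize(lambda p: -pairwise_cloglik(p, Y), x0=[0.0, 0.0, 0.0],
               method="Nelder-Mead")
mu_hat, sigma_hat, rho_hat = fit.x[0], np.exp(fit.x[1]), np.tanh(fit.x[2])
print("CMLE (mu, sigma, rho):", mu_hat, sigma_hat, rho_hat)
```

Each pair $(j, k)$ plays the role of one index set $A_k$ in (1); replacing the bivariate normal marginals with other marginal or conditional blocks changes only the inner loop of the sketch.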

