Abstract

In practice, multivariate skew normal mixture (MSNM) models provide a more flexible framework than multivariate normal mixture models, especially for heterogeneous and asymmetric data. For MSNM models, however, maximum likelihood estimation often exhibits undesirable ("bad") inferential behavior, because the likelihood function is unbounded and the shape parameters can diverge. We consider two penalties on the log-likelihood function that counter both issues simultaneously in MSNM models. We show that the penalized maximum likelihood estimator is strongly consistent when the putative order of the mixture is equal to or larger than the true order. We also provide penalized expectation-maximization (EM)-type algorithms to compute the penalized estimates. Finite-sample performance is examined through simulations, real-data applications, and comparisons with existing methods.
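To illustrate why a penalty is needed at all, the sketch below evaluates a penalized log-likelihood for a univariate normal mixture (chosen for brevity; the paper's setting is multivariate skew normal, and it also penalizes the shape parameters). The penalty form used here, proportional to -(1/σ² + log σ²) for each component variance, is one common choice from the penalized-likelihood literature and is an assumption, not necessarily the penalty proposed in the paper; it shows the key mechanism: as any component variance shrinks toward zero, the unpenalized likelihood diverges, but the penalized criterion is driven to −∞, keeping the maximizer away from degenerate solutions.

```python
import math

def log_gauss(x, mu, var):
    # Log density of N(mu, var) at x.
    return -0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var)

def penalized_loglik(data, weights, mus, variances, a=1.0):
    """Penalized log-likelihood for a univariate normal mixture.

    Illustrative sketch only: the penalty p(var) = -a * (1/var + log var)
    is an assumed, generic variance penalty that bounds the criterion;
    the paper's MSNM penalties (including one on shape parameters) differ.
    """
    ll = 0.0
    for x in data:
        # Per-component log weights + log densities, combined via
        # log-sum-exp for numerical stability.
        comps = [math.log(w) + log_gauss(x, m, v)
                 for w, m, v in zip(weights, mus, variances)]
        mx = max(comps)
        ll += mx + math.log(sum(math.exp(c - mx) for c in comps))
    # Penalty tends to -infinity as any variance -> 0, so the penalized
    # criterion stays bounded above even though the likelihood does not.
    pen = -a * sum(1.0 / v + math.log(v) for v in variances)
    return ll + pen

# A non-degenerate fit scores higher than one with a collapsing variance,
# even though the unpenalized likelihood of the degenerate fit explodes.
good = penalized_loglik([0.0, 1.0], [0.5, 0.5], [0.0, 1.0], [1.0, 1.0])
bad = penalized_loglik([0.0, 1.0], [0.5, 0.5], [0.0, 1.0], [1e-8, 1.0])
```

Here `bad` corresponds to a component variance of 1e-8 centered on a data point; its penalty term (on the order of −1e8) dominates the inflated likelihood, so `bad < good`.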
