Abstract

Finite mixture normal regression (FMNR) models are widely used to investigate the relationship between a response variable and a set of explanatory variables arising from several unknown latent homogeneous groups. However, the classical EM algorithm and Gibbs sampling used to fit this model have several weaknesses. In this paper, a non-iterative sampling algorithm for fitting the FMNR model is proposed from a Bayesian perspective. The procedure generates independent and identically distributed (i.i.d.) samples from the posterior distributions of the parameters and produces more reliable estimates than the EM algorithm and Gibbs sampling. Simulation studies are conducted to illustrate the performance of the algorithm, with supporting results. Finally, a real data set is analyzed to show the usefulness of the methodology.

Highlights

  • Finite mixture regression (FMR) models are powerful statistical tools for exploring the relationship between a response variable and a set of explanatory variables arising from several latent homogeneous groups

  • The classical methods for fitting these mixture models are mainly Gibbs sampling for Bayesian analysis and the EM algorithm (Dempster [14]) for finding the maximum likelihood estimator (MLE) from a frequentist perspective; the crucial technique in both is to employ a group of latent variables indicating the group each observation belongs to, thereby formulating a missing data structure

  • To investigate the performance of the proposed algorithm, we conduct simulations under different settings and use four criteria (the mean, mean square error, mean absolute deviation, and coverage probability) to compare the method with the classical EM algorithm and Gibbs sampling; a sketch of these criteria follows this list
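
The following minimal Python sketch shows how these four Monte Carlo criteria could be computed for one scalar parameter across R simulation replications. It is illustrative only: the function name `evaluate` and its inputs are assumptions, not the paper's code.

    import numpy as np

    def evaluate(estimates, lowers, uppers, truth):
        """Monte Carlo summary for one scalar parameter over R replications.

        estimates: point estimates from each replication
        lowers, uppers: interval-estimate endpoints from each replication
        truth: the true parameter value used to generate the data
        """
        est = np.asarray(estimates)
        lo, up = np.asarray(lowers), np.asarray(uppers)
        return {
            "mean": est.mean(),                                  # average estimate
            "mse": ((est - truth) ** 2).mean(),                  # mean square error
            "mad": np.abs(est - truth).mean(),                   # mean absolute deviation
            "coverage": np.mean((lo <= truth) & (truth <= up)),  # coverage probability
        }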

Summary

Introduction

Finite mixture regression (FMR) models are powerful statistical tools for exploring the relationship between a response variable and a set of explanatory variables arising from several latent homogeneous groups. The classical methods for fitting these mixture models are mainly Gibbs sampling for Bayesian analysis and the EM algorithm (Dempster [14]) for finding the maximum likelihood estimator (MLE) from a frequentist perspective. The crucial technique in both is to introduce a group of latent variables indicating the group each observation belongs to, which establishes a missing data structure. With this structure in place, data augmentation algorithms such as the EM algorithm and Gibbs sampling can be performed; moreover, by taking advantage of the same missing data structure, a new non-iterative sampling algorithm can be carried out smoothly. The resulting complete-data likelihood is crucial for implementing the EM algorithm, Gibbs sampling, and the IBF algorithm
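
As a concrete illustration of this latent-variable formulation, the Python sketch below simulates data from a hypothetical two-component FMNR model and performs one EM iteration built on the missing data structure. All names, initial values, and the simulated design are assumptions for illustration, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulate a two-component mixture of normal regressions.
    n, p = 200, 2
    X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
    beta_true = np.array([[1.0, 2.0], [-1.0, -2.0]])       # coefficient row per component
    sigma_true = np.array([0.5, 0.8])
    pi_true = np.array([0.6, 0.4])
    z = rng.choice(2, size=n, p=pi_true)                   # latent group indicators
    y = rng.normal((X * beta_true[z]).sum(axis=1), sigma_true[z])

    def normal_pdf(y, mu, sigma):
        return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

    # One EM iteration from rough starting values.
    pi_ = np.array([0.5, 0.5])
    beta = rng.normal(size=(2, p))   # random start to break component symmetry
    sigma = np.array([1.0, 1.0])

    # E-step: posterior probability that observation i belongs to component k.
    dens = np.column_stack(
        [pi_[k] * normal_pdf(y, X @ beta[k], sigma[k]) for k in range(2)]
    )
    w = dens / dens.sum(axis=1, keepdims=True)

    # M-step: weighted least squares and weighted residual variance per component.
    for k in range(2):
        Xw = X * w[:, k][:, None]                          # W X with W = diag(w[:, k])
        beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)      # (X'WX) b = X'Wy
        resid = y - X @ beta[k]
        sigma[k] = np.sqrt((w[:, k] * resid ** 2).sum() / w[:, k].sum())
    pi_ = w.mean(axis=0)

Gibbs sampling augments the same structure by iteratively drawing the indicators and the parameters from their full conditional distributions, whereas the non-iterative IBF sampler exploits it to generate i.i.d. samples from the posterior distributions of the parameters.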

The Prior and Conditional Distributions
IBF Sampler
Simulation Studies
Real Data Analysis
Algorithm Selection
Discussion