Abstract

The BayesBinMix package offers a Bayesian framework for clustering binary data, with or without missing values, by fitting mixtures of multivariate Bernoulli distributions with an unknown number of components. It allows joint estimation of the number of clusters and the model parameters using Markov chain Monte Carlo sampling. Heated chains are run in parallel, accelerating convergence to the target posterior distribution. Identifiability issues are addressed by implementing label-switching algorithms. The package is demonstrated and benchmarked against the Expectation-Maximization algorithm using a simulation study as well as a real dataset.
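To make the model concrete, here is a minimal R sketch (not taken from the package) that simulates binary data from a mixture of multivariate Bernoulli distributions. The commented call at the end shows how such data might be passed to the package's main function; the argument names there are assumptions that should be checked against the package documentation.

    ## Illustrative sketch: simulate binary data from a mixture of
    ## multivariate Bernoulli distributions (the model BayesBinMix fits).
    set.seed(42)
    n <- 200; d <- 10; K <- 3
    w     <- c(0.5, 0.3, 0.2)                          # mixing proportions
    theta <- matrix(runif(K * d), nrow = K)            # per-cluster success probabilities
    z     <- sample(1:K, n, replace = TRUE, prob = w)  # latent cluster labels
    x     <- matrix(rbinom(n * d, size = 1, prob = theta[z, ]), nrow = n)

    ## A matrix like x would be the binary data input to the package's main
    ## function. Argument names below are assumptions based on the package
    ## documentation; verify them there before use:
    ## fit <- BayesBinMix::coupledMetropolis(Kmax = 10, nChains = 4,
    ##          heats = seq(1, 0.4, length = 4), binaryData = x,
    ##          outPrefix = "bbm-output", ClusterPrior = "poisson", m = 1500)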

Highlights

  • Clustering data is a fundamental task in a wide range of applications, and finite mixture models are widely used for this purpose (McLachlan and Peel, 2000; Marin et al., 2005; Frühwirth-Schnatter, 2006)

  • The likelihood surface of a mixture model can exhibit many local maxima, and it is well known that the EM algorithm may fail to converge to the main mode if it is initialized from a point close to a minor mode

  • The package's main function, coupledMetropolis, takes as input a binary data array and runs the allocation sampler for a series of heated chains that run in parallel, while swaps between pairs of chains are proposed (see the sketch below)
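The swap step works in the same way as in generic Metropolis-coupled MCMC. The R sketch below illustrates the acceptance rule for exchanging the states of two heated chains; it is a generic illustration rather than the package's internal code, and the function and variable names are invented for the example.

    ## Illustrative chain-swap move in Metropolis-coupled MCMC.
    ## heats[i] is the temperature of chain i; log_post[i] is the current
    ## (untempered) log-posterior of chain i's state.
    propose_swap <- function(heats, log_post) {
      pair <- sample(seq_along(heats), 2)   # pick two chains at random
      i <- pair[1]; j <- pair[2]
      log_ratio <- (heats[i] - heats[j]) * (log_post[j] - log_post[i])
      accept <- log(runif(1)) < log_ratio   # Metropolis acceptance of the swap
      list(pair = pair, accept = accept)
    }

    ## Example: four chains with decreasing heats and some current log-posteriors
    propose_swap(heats = c(1, 0.8, 0.6, 0.4), log_post = c(-120, -118, -115, -110))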


Summary

Introduction

Clustering data is a fundamental task in a wide range of applications, and finite mixture models are widely used for this purpose (McLachlan and Peel, 2000; Marin et al., 2005; Frühwirth-Schnatter, 2006). The Bayesian framework allows one to place a prior distribution on both the number of clusters and the model parameters, and to (approximately) sample from the joint posterior distribution using Markov chain Monte Carlo (MCMC) algorithms (Richardson and Green, 1997; Stephens, 2000a; Nobile and Fearnside, 2007; White et al., 2016). This does not mean that the Bayesian approach is free of difficulties. We use the allocation sampler of Nobile and Fearnside (2007), who introduced this sampling scheme for parametric families admitting conjugate prior distributions and applied it in the specific context of mixtures of normal distributions. This approach was recently followed by White et al. (2016), who also allowed for variable selection.
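For context, a standard conjugate specification of this model, of the kind the allocation sampler exploits, can be sketched as follows (the hyperparameter symbols are illustrative and need not match the package's notation):

    K ~ p(K),    π | K ~ Dirichlet(γ, …, γ),    θ_kj ~ Beta(α, β)
    P(z_i = k | π) = π_k,    x_ij | z_i = k ~ Bernoulli(θ_kj),    i = 1, …, n,  j = 1, …, d

Under such a conjugate structure the weights π and success probabilities θ can be integrated out analytically, so the sampler only needs to update the allocations z_1, …, z_n and the number of components K.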

Method

The number of components is updated through component ejection and absorption moves. If an ejection is attempted:

(a) Propose reallocation of observations assigned to the ejecting component between itself and the ejected component according to the Beta(α, α) distribution.

If an absorption is attempted:

[Figure: quantiles for each variable]
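To make step (a) concrete, the following R sketch shows how such a reallocation proposal could be generated. The allocation vector, the component labels, and the value of α are illustrative placeholders; this is not the package's internal code.

    ## Minimal sketch of the ejection reallocation proposal (illustration only).
    set.seed(1)
    z       <- sample(1:3, 50, replace = TRUE)   # current allocations of 50 observations
    k_eject <- 2                                 # component that ejects a new, empty component
    k_new   <- 4                                 # label given to the ejected component
    alpha   <- 1                                 # Beta(alpha, alpha) proposal parameter (assumed value)

    members <- which(z == k_eject)               # observations currently in the ejecting component
    p_move  <- rbeta(1, alpha, alpha)            # reallocation probability drawn from Beta(alpha, alpha)
    move    <- runif(length(members)) < p_move   # move each member independently with probability p_move

    z_prop <- z
    z_prop[members[move]] <- k_new               # proposed allocations (accepted or rejected by MH)
    table(z_prop)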
Summary and remarks
