Abstract

The Dirichlet Process (DP) mixture model has become a popular choice for model-based clustering, largely because it allows the number of clusters to be inferred. The sequential updating and greedy search (SUGS) algorithm (Wang & Dunson, 2011) was proposed as a fast method for performing approximate Bayesian inference in DP mixture models, by posing clustering as a Bayesian model selection (BMS) problem and avoiding the use of computationally costly Markov chain Monte Carlo methods. Here we consider how this approach may be extended to permit variable selection for clustering, and also demonstrate the benefits of Bayesian model averaging (BMA) in place of BMS. Through an array of simulation examples and well-studied examples from cancer transcriptomics, we show that our method performs competitively with the current state-of-the-art, while also offering computational benefits. We apply our approach to reverse-phase protein array (RPPA) data from The Cancer Genome Atlas (TCGA) in order to perform a pan-cancer proteomic characterisation of 5157 tumour samples. We have implemented our approach, together with the original SUGS algorithm, in an open-source R package named sugsvarsel, which accelerates analysis by performing intensive computations in C++ and provides automated parallel processing. The R package is freely available from: https://github.com/ococrook/sugsvarsel.

Highlights

  • Bayesian nonparametric methods have become commonplace in the statistics and machine learning literature due to their flexibility and wide applicability

  • For sequential updating and greedy search (SUGS) we choose the partition with maximal pseudo-marginal likelihood (PML), while for SUGSVarSel we select the result with maximal marginal likelihood (ML)

  • The algorithm selects a total of 92 genes, including TCL1, TCRB, IL8, EPB72, IL7R, TCRG, NFIL6, which are all known to be associated with leukaemia (Natsuka et al, 1992; Pekarsky, Hallas & Croce, 2001; Van der Velden et al, 2004; Kuett et al, 2015; Chen, Tsau & Lin, 2010; Shochat et al, 2011)
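As background for the PML criterion mentioned above, the pseudo-marginal likelihood is commonly defined as the product of leave-one-out predictive densities (conditional predictive ordinates); the following is a sketch in our own notation, not a formula taken verbatim from the paper:

```latex
% Pseudo-marginal likelihood (Geisser--Eddy style cross-validated criterion):
% the product over observations of the leave-one-out predictive density,
% where x_{-i} denotes the data with observation i removed.
\mathrm{PML} \;=\; \prod_{i=1}^{n} p\!\left(x_i \mid x_{-i}\right)
```

Larger values indicate a partition whose implied predictive distribution better supports each held-out observation, which is why SUGS retains the partition maximising this quantity.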


Introduction

Bayesian nonparametric methods have become commonplace in the statistics and machine learning literature due to their flexibility and wide applicability. By introducing latent selection indicators, these models can be extended to perform variable selection for clustering (Kim, Tadesse & Vannucci, 2006), which is relevant in high-dimensional settings (Law, Figueiredo & Jain, 2004; Constantinopoulos, Titsias & Likas, 2006). There are several approaches for model-based clustering and variable selection (see Fop & Murphy, 2018, for a recent review), but current Markov chain Monte Carlo (MCMC) algorithms for Bayesian inference in Dirichlet process (DP) mixture models (Neal, 2000; Jain & Neal, 2004) are computationally costly, and often infeasible for large datasets. A number of algorithms have been proposed for fast approximate inference in DP and related mixture models, which make possible the analysis of datasets with large numbers of observations.
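To make the sequential character of the SUGS approach concrete, the greedy allocation step under a DP(α, G₀) prior can be sketched as follows (the notation here is ours and assumed for illustration, not quoted from Wang & Dunson, 2011):

```latex
% Greedy allocation of observation x_i given the earlier assignments
% z_{1:i-1}: the Chinese-restaurant-process prior weight times the
% posterior predictive density, for an existing cluster h of size n_h
% or for opening a new cluster. k_{i-1} is the number of clusters so far.
P\!\left(z_i = h \mid z_{1:i-1}, x_{1:i}\right) \;\propto\;
\begin{cases}
\dfrac{n_h}{\alpha + i - 1}\,
  p\!\left(x_i \mid \{x_j : z_j = h,\; j < i\}\right),
  & h \le k_{i-1},\\[2ex]
\dfrac{\alpha}{\alpha + i - 1}\, p(x_i),
  & h = k_{i-1} + 1.
\end{cases}
```

SUGS assigns each observation to the cluster maximising this quantity in a single pass, which is what avoids the cost of MCMC; the variable-selection extension additionally updates latent inclusion indicators within the same sequential scheme.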
