Abstract

The use of mutual information as a similarity measure in agglomerative hierarchical clustering (AHC) raises an important issue: some correction needs to be applied for the dimensionality of the variables. In this work, we formulate the decision to merge dependent multivariate normal variables in an AHC procedure as a Bayesian model comparison. We found that the Bayesian formulation naturally shrinks the empirical covariance matrix towards a matrix set a priori (e.g., the identity), provides an automated stopping rule, and corrects for dimensionality using a term that scales up the measure as a function of the dimensionality of the variables. Moreover, the resulting log Bayes factor is asymptotically proportional to the plug-in estimate of mutual information, with an additive correction for dimensionality in agreement with the Bayesian information criterion (BIC). We investigated the behavior of these Bayesian alternatives to mutual information (in exact and asymptotic forms) on simulated and real data. An encouraging result was first obtained on simulations: hierarchical clustering based on the log Bayes factor outperformed off-the-shelf clustering techniques, as well as raw and normalized mutual information, in terms of classification accuracy. On a toy example, the Bayesian approaches yielded results similar to those of mutual information clustering techniques, with the advantage of automated thresholding. On real functional magnetic resonance imaging (fMRI) datasets measuring brain activity, they identified clusters consistent with the established outcome of standard procedures. In this application, normalized mutual information behaved atypically, systematically favoring very large clusters. These initial experiments suggest that the proposed Bayesian alternatives to mutual information are a useful new tool for hierarchical clustering.

Highlights

  • Cluster analysis aims at uncovering natural groups of objects in a multivariate dataset

  • In the vast variety of methods used in cluster analysis, agglomerative hierarchical clustering (AHC) is a generic procedure that sequentially merges the pairs of clusters that are most similar according to an arbitrary function called a similarity measure, thereby generating a nested set of partitions, called a hierarchy [2] (a minimal sketch of this loop is given after this list)

  • We focus on the clustering of random variables based on their mutual information, which has recently gained popularity in cluster analysis, notably in the field of genomics [4,5,6,7] and in functional magnetic resonance imaging (fMRI) data analysis [8,9,10]
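To make the AHC procedure concrete, here is a minimal Python sketch of the generic loop with a pluggable similarity measure. The names (`ahc`, `similarity`, `stop_below`) are illustrative and not taken from the paper; with a log Bayes factor as the similarity, setting `stop_below=0` implements the kind of automated stopping rule described in the abstract.

```python
def ahc(items, similarity, stop_below=None):
    """Generic agglomerative hierarchical clustering (AHC).

    items: iterable of variable labels, each starting as its own cluster.
    similarity(a, b): scores the benefit of merging clusters a and b
        (higher means more similar); clusters are frozensets of labels.
    stop_below: if given, stop merging once the best similarity falls
        below this value (e.g., 0 for a log Bayes factor).
    """
    clusters = [frozenset([i]) for i in items]
    hierarchy = [list(clusters)]
    while len(clusters) > 1:
        # Score every pair of current clusters and pick the most similar.
        scored = [(similarity(a, b), a, b)
                  for k, a in enumerate(clusters) for b in clusters[k + 1:]]
        best, a, b = max(scored, key=lambda t: t[0])
        if stop_below is not None and best < stop_below:
            break  # automated stopping rule
        # Merge the winning pair and record the new partition.
        clusters = [c for c in clusters if c not in (a, b)] + [a | b]
        hierarchy.append(list(clusters))
    return hierarchy
```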


Summary

Introduction

Cluster analysis aims at uncovering natural groups of objects in a multivariate dataset (see [1] for a review). We consider Bayesian model-based clustering [1,18,19,20] as an alternative to mutual information for the hierarchical clustering of dependent multivariate normal variables.

Let us first calculate $p(S_{i \cup j} \mid M_D)$, the marginal model likelihood under the hypothesis of dependence. Expressing this quantity as a function of the model parameters yields

$$p(S_{i \cup j} \mid M_D) = \int p(S_{i \cup j} \mid M_D, \Sigma_{i \cup j}) \, p(\Sigma_{i \cup j} \mid M_D) \, d\Sigma_{i \cup j}. \quad (2)$$

Introducing the model parameters similarly yields for the marginal likelihood under the hypothesis of independence

$$p(S_{i \cup j} \mid M_I) = \int p(S_{i \cup j} \mid M_I, \Sigma_i, \Sigma_j) \, p(\Sigma_i, \Sigma_j \mid M_I) \, d\Sigma_i \, d\Sigma_j. \quad (6)$$

To calculate this integral, we again need the likelihood $p(S_{i \cup j} \mid M_I, \Sigma_i, \Sigma_j)$ and the prior distribution $p(\Sigma_i, \Sigma_j \mid M_I)$ of the two diagonal blocks of the covariance matrix under $M_I$. The Bayesian similarity measure is then obtained by incorporating Eqs (5) and (10) into Eq (1), yielding a closed-form expression for the log Bayes factor.
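The exact similarity measure involves the closed-form marginal likelihoods above; the abstract also states an asymptotic form, namely the plug-in estimate of mutual information scaled by the sample size, minus an additive BIC-style correction for dimensionality. The sketch below implements that asymptotic form under the assumption that the correction equals $(d_i d_j / 2) \ln n$, i.e., the number of extra cross-covariance parameters of the dependence model times $(\ln n)/2$; the function name and interface are hypothetical.

```python
import numpy as np

def log_bayes_factor_asymptotic(X_i, X_j):
    """Asymptotic log Bayes factor for merging two blocks of variables.

    X_i: (n, d_i) array and X_j: (n, d_j) array of jointly observed
    samples (one row per observation). Returns the plug-in Gaussian
    mutual information scaled by n, minus a BIC-style penalty assumed
    here to be (d_i * d_j / 2) * ln(n).
    """
    n, d_i = X_i.shape
    d_j = X_j.shape[1]
    X = np.hstack([X_i, X_j])
    # Maximum-likelihood (plug-in) covariance of the joint block.
    S = np.cov(X, rowvar=False, bias=True)
    S_i, S_j = S[:d_i, :d_i], S[d_i:, d_i:]
    # Plug-in mutual information for multivariate normal variables:
    # I_hat = 0.5 * ln(|S_i| |S_j| / |S|).
    _, logdet = np.linalg.slogdet(S)
    _, logdet_i = np.linalg.slogdet(S_i)
    _, logdet_j = np.linalg.slogdet(S_j)
    I_hat = 0.5 * (logdet_i + logdet_j - logdet)
    return n * I_hat - 0.5 * d_i * d_j * np.log(n)
```

Plugged into the generic AHC loop sketched under the highlights, e.g. `ahc(range(d), lambda a, b: log_bayes_factor_asymptotic(X[:, sorted(a)], X[:, sorted(b)]), stop_below=0.0)` for an `(n, d)` data array `X`, pairs of clusters are merged only while the dependence model $M_D$ is favored over the independence model $M_I$.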