Abstract

The quality of a cluster analysis of unlabeled units depends on the quality of the between-unit dissimilarity measure. A data-dependent dissimilarity measure is more objective than a data-independent geometric measure such as Euclidean distance. As suggested by Breiman, many data-driven approaches are based on decision-tree ensembles, such as a random forest (RF), that produce a proximity matrix which can easily be transformed into a dissimilarity matrix. An RF can be grown using labels that distinguish units with real data from units with synthetic data. The resulting dissimilarity matrix is input to a clustering program, and units are assigned labels corresponding to cluster membership. We introduce a general iterative cluster (GIC) algorithm that improves the proximity matrix and the clusters of the base RF. The cluster labels are used to grow a new RF, yielding an updated proximity matrix that is again input to the clustering program. The process is repeated until convergence. The same procedure can be used with other base procedures, such as the extremely randomized tree ensemble. We evaluate the performance of the GIC algorithm using benchmark and simulated data sets. As measured by the Silhouette score, the resulting clusters are substantially better than those of the base clustering algorithm. The GIC package has been released in R: https://cran.r-project.org/web/packages/GIC/index.html.
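
To make the iteration concrete, the following R sketch shows one way to realize the loop described above using the randomForest and cluster packages. It is an illustrative sketch, not the released GIC implementation: the sqrt(1 - proximity) transform, PAM as the clustering program, and a cluster count k fixed in advance are assumptions made for the example.

## A minimal sketch of the iterative scheme described in the abstract,
## assuming the randomForest and cluster packages; this is NOT the
## released GIC package.
library(randomForest)
library(cluster)

iterative_rf_cluster <- function(x, k, max_iter = 10) {
  ## Base step: with y omitted, randomForest runs in unsupervised mode,
  ## distinguishing real from synthetic units internally, and returns a
  ## proximity matrix for the real units.
  rf <- randomForest(x, proximity = TRUE)
  labels <- pam(as.dist(sqrt(1 - rf$proximity)), k, diss = TRUE)$clustering
  for (i in seq_len(max_iter)) {
    ## Grow a new RF supervised by the current cluster labels, then
    ## recompute the proximity matrix and re-cluster.
    rf <- randomForest(x, y = factor(labels), proximity = TRUE)
    new_labels <- pam(as.dist(sqrt(1 - rf$proximity)), k, diss = TRUE)$clustering
    ## Stop when the labels no longer change (this simple test ignores
    ## label permutations; a matching step would be more robust).
    if (all(new_labels == labels)) break
    labels <- new_labels
  }
  labels
}

## Example usage: cluster the iris measurements with species labels withheld.
set.seed(1)
cl <- iterative_rf_cluster(iris[, 1:4], k = 3)
table(cl, iris$Species)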
