This paper proposes a novel hierarchical high-dimensional clustering algorithm based on the Active Learning Method (ALM), a fuzzy learning algorithm. The hierarchical part of the algorithm consists of two phases: divisive and agglomerative. The divisive phase, a zooming-in process, hierarchically searches for sub-clusters within already-found clusters. At each level of the hierarchy, clusters are found by an ensemble clustering method based on the density of the data. This part of the algorithm blurs each data point into multiple one-dimensional fuzzy membership functions called ink-drop patterns, then accumulates the ink-drop patterns of all data points on each dimension separately. Next, it performs one-dimensional density partitioning to produce an ensemble of clustering solutions, and the results are combined by a novel consensus method that uses prime numbers. The agglomerative phase is a bottom-up approach that merges clusters based on a novel distance metric, named K²-nearest neighbor. The algorithm is named the Hierarchical High-Dimensional Unsupervised Active Learning Method (HiDUALM) and is explained in more detail throughout this paper. Whereas classical clustering methods are not well suited to high-dimensional data, the proposed method addresses the associated speed and memory problems through ensemble learning, while its hierarchical structure and the use of different distance metrics provide clusterings at different levels of granularity. Experiments on synthetic and real-world datasets demonstrate the effectiveness of the proposed clustering algorithm.
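
The abstract does not give implementation details, so the following is a minimal, hypothetical sketch of one way the per-dimension ink-drop accumulation, one-dimensional density partitioning, and a prime-number-based consensus labeling could fit together. The function names (ink_drop_density, partition_1d, prime_consensus), the Gaussian shape of the ink drops, the valley-based partition boundaries, and the label-to-prime product encoding are all assumptions made for illustration; they are not the authors' HiDUALM implementation.

```python
# Hypothetical illustration only -- not the authors' code.
import numpy as np

def ink_drop_density(values, grid, radius):
    """Accumulate a 1-D Gaussian 'ink drop' for every data value on a grid (assumed drop shape)."""
    density = np.zeros_like(grid)
    for v in values:
        density += np.exp(-((grid - v) ** 2) / (2.0 * radius ** 2))
    return density

def partition_1d(values, grid, density):
    """Assign each value to an interval between local density minima (assumed valley-based split)."""
    valleys = [grid[i] for i in range(1, len(grid) - 1)
               if density[i] < density[i - 1] and density[i] < density[i + 1]]
    return np.searchsorted(np.array(valleys), values)  # label = interval index

def first_primes(n):
    """Return the first n prime numbers via simple trial division."""
    primes, c = [], 2
    while len(primes) < n:
        if all(c % p for p in primes):
            primes.append(c)
        c += 1
    return primes

def prime_consensus(per_dim_labels):
    """Hypothetical consensus: map each (dimension, 1-D label) pair to a distinct prime,
    multiply the primes per point, and treat equal products as one candidate cluster."""
    n_points = len(per_dim_labels[0])
    pairs = sorted({(d, int(l)) for d, labels in enumerate(per_dim_labels) for l in labels})
    prime_of = dict(zip(pairs, first_primes(len(pairs))))
    products = np.ones(n_points, dtype=object)
    for d, labels in enumerate(per_dim_labels):
        for i, l in enumerate(labels):
            products[i] *= prime_of[(d, int(l))]
    # Relabel the distinct products as consecutive cluster ids.
    _, cluster_ids = np.unique(products.astype(str), return_inverse=True)
    return cluster_ids

# Usage: two well-separated 2-D blobs should fall into two consensus clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 0.3, (50, 2))])
grid = np.linspace(X.min() - 1, X.max() + 1, 512)
labels_per_dim = []
for d in range(X.shape[1]):
    dens = ink_drop_density(X[:, d], grid, radius=0.3)
    labels_per_dim.append(partition_1d(X[:, d], grid, dens))
print(prime_consensus(labels_per_dim))
```

The prime product is used here only as a convenient collision-free encoding of per-dimension label combinations; how the paper's actual consensus method employs prime numbers, and how the subsequent K²-nearest-neighbor merging works, is described in the full text.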