Abstract
Recently, we proposed a physically inspired graph-theoretical method, called Nearest Descent (ND), which organizes a dataset into an in-tree graph structure. Thanks to several appealing and effective properties, the constructed in-tree proves well suited to data clustering. Although this in-tree contains some undesired (i.e., inter-cluster) edges, those edges are usually highly distinguishable, in sharp contrast to the situation in the well-known Minimal Spanning Tree (MST). Here, we propose another graph-theoretical method, called Hierarchical Nearest Neighbor Descent (HNND). Like ND, HNND organizes a dataset into an in-tree, but in a more efficient way; consequently, HNND-based clustering (HNND-C) is also more efficient than ND-based clustering (ND-C). This is confirmed by experimental results on five high-dimensional, large-scale mass cytometry datasets, which also show that HNND-C achieves overall better performance than several state-of-the-art clustering methods.