Abstract

This paper presents partitioning hard kernel clustering methods in which the dissimilarity measure is obtained as a sum of squared Euclidean distances between patterns and centroids, computed individually for each variable by means of kernel functions. The advantage of the proposed approach over conventional kernel clustering methods is that it allows the weights of the variables to be learned during the clustering process, improving the performance of the algorithms. Another advantage of this approach is that it allows the introduction of various partition and cluster interpretation tools. Experiments with benchmark data sets illustrate the usefulness of our algorithms and the merit of the partition and cluster interpretation tools.
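
As a rough illustration of the kind of dissimilarity described above (the paper's exact formulation, kernel choice, and weight-update rules are not reproduced here, so the symbols K_j, lambda_j, x_{ij}, and g_{cj} are assumptions), a per-variable kernel-induced squared distance with adaptive variable weights could take the form

\[
\varphi^{2}(x_i, g_c)
\;=\; \sum_{j=1}^{p} \lambda_j \,\bigl\| \phi_j(x_{ij}) - \phi_j(g_{cj}) \bigr\|^{2}
\;=\; \sum_{j=1}^{p} \lambda_j \,\bigl[ K_j(x_{ij}, x_{ij}) - 2\,K_j(x_{ij}, g_{cj}) + K_j(g_{cj}, g_{cj}) \bigr],
\]

where the second equality is the standard kernel-trick expansion of a squared Euclidean distance in feature space. For a Gaussian kernel, where \(K_j(x, x) = 1\), this reduces to \(\sum_{j=1}^{p} 2\,\lambda_j \bigl(1 - K_j(x_{ij}, g_{cj})\bigr)\), making explicit how each variable contributes its own kernel term weighted by \(\lambda_j\).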
