Abstract

The central purpose of this paper is to bring together the basic ideas of two separate theories: the theory of ordinal clustering, as developed by Janowitz et al., and the theory of probabilistic metric spaces, as developed by Schweizer et al. The principal result is a new theory of clustering, called percentile clustering, in which the clustering is based not on some average or other typical value of the data, but directly on the distributed data itself. A secondary outgrowth is a generalized theory of ordinal clustering. The paper begins with a brief but essentially self-contained exposition of the main ideas of the two theories mentioned above. It then combines them to lay the foundations of the theory of percentile clustering. It goes on to develop a number of specific algorithms and to illustrate them with a small, artificial data set. The paper concludes by applying the new clustering methods to two concrete examples. The first is a data set concerning combat deaths in the Vietnam War; the second is a data set supplied by N. Creel dealing with the classification of species of gibbons. In both instances, results obtained with various standard clustering techniques are also presented.


