Abstract
This chapter addresses the problem of classifying samples based only on the pairwise similarities between samples. Similarity-based classifiers already exist, as do classifiers based on generative models; SDA is a new framework for classification comprising classifiers that are both similarity-based and generative. Within the general SDA framework, this chapter describes several families of classifiers: the SDA classifier, the local SDA classifier, and the mixture SDA classifier. The SDA classifier is the foundation of SDA. It classifies based on class-conditional generative models of the similarity of the samples to representative class prototypes, or centroids. The SDA framework is introduced, developed, and discussed with the aid of this centroid-based SDA classifier. The centroid-based SDA classifier is then generalized beyond class centroids to arbitrary class-descriptive statistics, and other possible statistics are described, illustrating the power and generality of the SDA framework. The local SDA classifier is a local version of the SDA classifier: it builds similarity-based class-conditional generative models within a neighborhood of the test sample to be classified. The local class models have low bias and retain the interpretability associated with generative probability models. Local SDA is a consistent classifier, in the sense that its error rate converges to the Bayes error rate, which is the best possible error rate attainable by any classifier. The mixture SDA classifier draws on the well-established mixture-model research in metric learning. It generalizes the single-centroid SDA classifier to a mixture of single-centroid SDA components, and it can be trained with an expectation-maximization (EM) algorithm that parallels the standard EM approach for the well-known Gaussian mixture models. The problem of classifying samples based only on their pairwise similarities may be divided into two sub-problems: measuring the similarity between samples and classifying the samples based on those similarities. It is beyond the scope of this chapter to discuss exhaustively and in detail the various ways to measure similarity and the various similarity-based classifiers.
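To make the centroid-based SDA classifier described above concrete, the following Python sketch shows one way such a similarity-based generative classifier can be organized: each class contributes a centroid (the training sample most similar to the rest of its class), class-conditional models of similarity-to-centroid are fit from the training data, and a test sample is assigned to the class maximizing the resulting posterior. The class name `CentroidSDA`, the user-supplied similarity function `sim`, and the use of ordinary exponential densities for the class-conditional similarity models are illustrative assumptions; the chapter's own parameterization of the generative models may differ.

```python
import numpy as np


class CentroidSDA:
    """Hypothetical sketch of a centroid-based, similarity-based generative classifier."""

    def fit(self, X, y, sim):
        self.sim = sim
        self.classes_ = sorted(set(y))
        self.centroids_ = {}
        self.priors_ = {}
        self.mean_sim_ = {}
        by_class = {c: [x for x, label in zip(X, y) if label == c] for c in self.classes_}
        for c, Xc in by_class.items():
            # Class centroid: the training sample with the highest total
            # similarity to the other samples of its class.
            totals = [sum(sim(x, z) for z in Xc) for x in Xc]
            self.centroids_[c] = Xc[int(np.argmax(totals))]
            self.priors_[c] = len(Xc) / len(X)
        for c, Xc in by_class.items():
            # Class-conditional mean similarity to every class centroid;
            # each mean sets the parameter of a simple exponential model
            # (an illustrative choice of generative model, not necessarily
            # the one used in the chapter).
            self.mean_sim_[c] = {
                h: float(np.mean([sim(x, self.centroids_[h]) for x in Xc]))
                for h in self.classes_
            }
        return self

    def predict_one(self, x):
        # Maximum a posteriori rule: score each class by the log-likelihood
        # of the similarities of x to all class centroids under that class's
        # exponential models, plus the log class prior.
        def score(c):
            total = np.log(self.priors_[c])
            for h in self.classes_:
                rate = 1.0 / max(self.mean_sim_[c][h], 1e-12)
                s = self.sim(x, self.centroids_[h])  # assumes similarities are nonnegative
                total += np.log(rate) - rate * s
            return total
        return max(self.classes_, key=score)
```

A typical call would be `CentroidSDA().fit(X_train, y_train, sim).predict_one(x_test)`, where `sim` returns a nonnegative similarity between two samples; only pairwise similarities are used, never feature vectors, which is the defining property of the SDA setting.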