Abstract
This chapter discusses techniques inspired by Bayes decision theory. For the newcomer to the field of pattern recognition, the chapter's algorithms and exercises are important for developing a basic understanding of, and familiarity with, fundamental notions of classification. Most of the algorithms are simple in both structure and physical reasoning. The optimal Bayesian classifier is significantly simplified under the following assumptions: the classes are equiprobable; the data in all classes follow Gaussian distributions; the covariance matrix is the same for all classes; and the covariance matrix is diagonal with all elements along the diagonal equal. Under these assumptions, the optimal Bayesian classifier reduces to the minimum Euclidean distance classifier, which assigns a pattern to the class whose mean is closest to it with respect to the Euclidean norm. Because of its simplicity, the Euclidean classifier is often used even when these assumptions are known not to hold. When the pdf that describes the data points in a class is not known, it must be estimated before the Bayesian classifier can be applied; the chapter focuses on a very popular method for modeling unknown probability density functions, known as mixture modeling. The nearest neighbor rule, although an old technique, remains one of the most popular classification rules.
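The minimum Euclidean distance classifier described above can be sketched in a few lines. The book itself works in MATLAB, but the following is an illustrative Python sketch; the class means and sample points are hypothetical, chosen only to show the rule in action.

```python
import numpy as np

def euclidean_classifier(means, X):
    """Assign each row of X to the class whose mean is closest
    in the Euclidean norm.

    means : (num_classes, dim) array of class mean vectors
    X     : (num_samples, dim) array of patterns to classify
    """
    # dists[i, j] = ||X[i] - means[j]||, computed via broadcasting
    dists = np.linalg.norm(X[:, None, :] - means[None, :, :], axis=2)
    # Each pattern gets the label of its nearest class mean
    return np.argmin(dists, axis=1)

# Hypothetical example: two classes with means at (0, 0) and (4, 4)
means = np.array([[0.0, 0.0],
                  [4.0, 4.0]])
X = np.array([[0.5, 0.2],
              [3.8, 4.1],
              [2.5, 2.6]])
labels = euclidean_classifier(means, X)
# The third point lies closer to (4, 4) than to (0, 0),
# so it is assigned to the second class.
```

Under the stated assumptions (equiprobable Gaussian classes sharing a diagonal covariance matrix with equal diagonal elements), this rule coincides with the optimal Bayesian decision.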
More From: Introduction to Pattern Recognition: A Matlab Approach