Abstract

The problem of feature selection in multi-class pattern recognition is viewed as a mapping of vector samples from n-dimensional space to m-dimensional space (m < n) by a linear transformation. The transformation matrix is obtained for Gaussian-distributed pattern classes by maximizing the expected divergence between any pair of pattern classes. Furthermore, for zero-mean Gaussian-distributed pattern classes, an upper bound on the probability of error is obtained in terms of the expected divergence.
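As an illustration of the quantity being maximized, the sketch below computes the (symmetric Kullback) divergence between two Gaussian classes and evaluates it after a linear feature mapping y = Ax, where A is an m×n matrix. This is a minimal numerical sketch, not the paper's optimization procedure; the function names `divergence` and `projected_divergence` are illustrative, and the closed-form expression used is the standard divergence between two multivariate normal densities.

```python
import numpy as np

def divergence(mu1, cov1, mu2, cov2):
    """Symmetric Kullback divergence J between two Gaussian classes
    N(mu1, cov1) and N(mu2, cov2) of equal dimension."""
    d = mu1 - mu2
    inv1, inv2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
    # Mean-difference term: d^T (cov1^{-1} + cov2^{-1}) d
    term_mean = d @ (inv1 + inv2) @ d
    # Covariance-mismatch term: tr(cov1^{-1} cov2 + cov2^{-1} cov1 - 2I)
    term_cov = np.trace(inv1 @ cov2 + inv2 @ cov1) - 2 * len(mu1)
    return 0.5 * (term_mean + term_cov)

def projected_divergence(A, mu1, cov1, mu2, cov2):
    """Divergence between the classes after the linear feature map
    y = A x; the Gaussians transform as N(A mu, A cov A^T)."""
    return divergence(A @ mu1, A @ cov1 @ A.T,
                      A @ mu2, A @ cov2 @ A.T)

# Example: two 3-D Gaussian classes, projected to 2 dimensions.
mu1, mu2 = np.array([1.0, 0.0, 0.0]), np.zeros(3)
cov = np.eye(3)
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])  # one candidate 2x3 transformation
print(divergence(mu1, cov, mu2, cov))           # divergence in full space
print(projected_divergence(A, mu1, cov, mu2, cov))  # after y = A x
```

Choosing A to maximize this projected divergence (in expectation over class pairs) is the selection criterion the abstract describes; the example above only evaluates the criterion for one fixed A.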
