Abstract

Dimensionality reduction is one way to reduce the computational load before analysis is attempted on massive high-dimensional data sets. It would be beneficial to have dimensionality reduction methods in which the transformation can be updated recursively from either known or partially identified data. This paper documents some of our recent work in dimensionality reduction with applications to real-time automatic pattern recognition systems. Fisher's Linear Discriminant (FLD) is one method of reducing dimensionality in pattern recognition applications where the covariances of the target groups are the same. We develop two recursive versions of the FLD appropriate for the two-class case. The first assumes that the class of each new data point is known; this version could be used with massive data sets in which each observation is labeled with its true class and must be processed as it arrives to build the classifiers. The second version recursively updates the FLD from partially classified data. The FLD and other reduction methods, such as principal component analysis, offer global dimensionality reduction within the framework of linear algebra applied to covariance matrices. We also describe local methods that use both mixture models and nearest neighbor calculations to construct local versions of these techniques. These local dimensionality reduction methods provide increased classification accuracy in lower dimensions.
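
As a concrete illustration of the two-class FLD and the first (fully labeled) recursive update described above, the following is a minimal sketch, not the paper's algorithm: the class name `RecursiveFLD`, its methods, and the small ridge term added before the solve are our own assumptions for illustration. It maintains running class means and a pooled within-class scatter via rank-one (Welford-style) updates, so the discriminant direction w ∝ S_w⁻¹(m₁ − m₂) can be refreshed as each labeled observation arrives.

```python
# Hedged sketch of a recursively updated two-class Fisher's Linear
# Discriminant; illustrative only, not the authors' implementation.
import numpy as np

class RecursiveFLD:
    def __init__(self, dim):
        self.n = np.zeros(2)                 # observations seen per class
        self.mean = np.zeros((2, dim))       # running class means
        self.scatter = np.zeros((dim, dim))  # pooled within-class scatter S_w

    def update(self, x, label):
        """Fold one labeled observation into the running statistics."""
        k = int(label)
        self.n[k] += 1
        delta = x - self.mean[k]
        self.mean[k] += delta / self.n[k]
        # Rank-one (Welford-style) update of the pooled scatter matrix.
        self.scatter += np.outer(delta, x - self.mean[k])

    def direction(self):
        """Current FLD direction w, proportional to S_w^{-1}(m1 - m2)."""
        ridge = 1e-9 * np.eye(self.scatter.shape[0])  # guards early singularity
        return np.linalg.solve(self.scatter + ridge,
                               self.mean[0] - self.mean[1])

# Usage on a synthetic stream: two classes with a shared covariance
# and shifted means, processed one labeled observation at a time.
rng = np.random.default_rng(0)
fld = RecursiveFLD(dim=5)
for _ in range(500):
    label = rng.integers(2)
    x = rng.normal(loc=label * 2.0, size=5)
    fld.update(x, label)
w = fld.direction()  # project new data to one dimension via x @ w
```

For the partially classified case the abstract describes, one natural variant (again an assumption on our part, not necessarily the paper's method) would replace the hard label with an estimated class posterior and weight each mean and scatter update by that probability.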
