Abstract

A method is described for designing near-optimal nonlinear classifiers, based on a self-organizing technique for estimating probability density functions under only weak assumptions about the densities. The method avoids disadvantages of existing methods by parametrizing a set of component densities from which the class densities are constructed. The parameters of the component densities are optimized by a self-organizing algorithm, minimizing the amount of labeling required for design samples. All the required computations are realized with the simple sum-of-product units commonly used in connectionist models. The density approximations produced by the method are illustrated in two dimensions for a multispectral image classification task, and its practical use is demonstrated on a small speech recognition problem. Related issues of invariant projections, cross-class pooling of data, and subspace partitioning are discussed.
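The abstract does not spell out the optimization rule, so the following is a minimal sketch of the general idea rather than the paper's method: each class-conditional density is approximated by a mixture of simple component densities (here, spherical Gaussians) whose parameters are fitted by an unsupervised EM loop, and classification picks the class with the larger estimated density. All names (fit_mixture, log_density) and settings (K, n_iter, equal class priors) are illustrative assumptions.

```python
# Sketch only: semiparametric density estimation with a Gaussian mixture,
# one mixture per class, classification by the larger class-conditional density.
import numpy as np

def fit_mixture(X, K=3, n_iter=50, seed=0):
    """Fit a K-component spherical Gaussian mixture to X by EM (unlabeled data)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    mu = X[rng.choice(n, K, replace=False)]   # component means, init from data
    var = np.full(K, X.var())                 # spherical component variances
    pi = np.full(K, 1.0 / K)                  # mixing weights
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to pi_k * N(x_i | mu_k, var_k I)
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        log_p = np.log(pi) - 0.5 * (d2 / var + d * np.log(2 * np.pi * var))
        log_p -= log_p.max(1, keepdims=True)
        r = np.exp(log_p)
        r /= r.sum(1, keepdims=True)
        # M-step: re-estimate weights, means, variances from responsibilities
        nk = r.sum(0) + 1e-9
        pi = nk / n
        mu = (r.T @ X) / nk[:, None]
        d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
        var = (r * d2).sum(0) / (nk * d) + 1e-6
    return pi, mu, var

def log_density(X, params):
    """Log of the estimated mixture density at each row of X (log-sum-exp)."""
    pi, mu, var = params
    d = X.shape[1]
    d2 = ((X[:, None, :] - mu[None]) ** 2).sum(-1)
    log_p = np.log(pi) - 0.5 * (d2 / var + d * np.log(2 * np.pi * var))
    m = log_p.max(1, keepdims=True)
    return (m + np.log(np.exp(log_p - m).sum(1, keepdims=True))).ravel()

# Usage on synthetic 2-D data (illustration only): fit one mixture per class,
# then assign a point to the class whose density is larger (equal priors).
rng = np.random.default_rng(1)
X0 = rng.normal([-2.0, 0.0], 1.0, (200, 2))
X1 = rng.normal([+2.0, 0.0], 1.0, (200, 2))
p0, p1 = fit_mixture(X0), fit_mixture(X1)
Xt = np.vstack([X0[:5], X1[:5]])
pred = (log_density(Xt, p1) > log_density(Xt, p0)).astype(int)
print(pred)  # expected: mostly 0s then 1s
```

Note that the E- and M-step updates reduce to sums of products of component activities, which is consistent with the abstract's remark that the computations map onto simple sum-of-product units.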
