Abstract

A review of recent patent applications indicates that neural networks using Hamming-type algorithms with minimum-mismatch selection provide an optimal combination of implementational simplicity, information storage capacity and signal-to-noise characteristics. These networks can be adapted to implement Bayes' rule by setting link gains to the negative logarithm of conditional or a priori probabilities. Where probability distributions and noise are not uniform or random, the performance of Bayesian classifiers may be significantly better than that of the corresponding Hamming network on the same vector set. We demonstrate this for a noisy digit classification task. We also generate biologically plausible curvature detectors for character recognition and compare the performance of Bayesian and Hamming networks at classifying the resultant vectors. Preliminary results suggest that Hamming networks may provide good approximations to the Bayes optimum for sparse natural vector sets under some conditions.
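
As an illustration of the relationship the abstract describes (this sketch is not from the paper itself), the Python below contrasts a minimum-mismatch Hamming classifier with a Bayesian variant whose link gains are negative log probabilities, so that minimising the summed gains is equivalent to maximising the posterior. The prototypes, noise rate, and function names are hypothetical; independent per-bit noise is assumed.

```python
import numpy as np

def hamming_classify(x, prototypes):
    """Return the index of the stored prototype with the fewest
    mismatched bits relative to x (minimum-mismatch selection)."""
    mismatches = np.sum(prototypes != x, axis=1)
    return int(np.argmin(mismatches))

def bayes_classify(x, p_bit_given_class, priors):
    """MAP classification of a binary vector under independent bit noise.

    p_bit_given_class[c, i] = P(bit i == 1 | class c). Link gains are
    the negative logarithms of these conditional probabilities and of
    the priors, so the class minimising total gain has maximum
    posterior probability. (Assumed setup, for illustration only.)"""
    eps = 1e-12  # guard against log(0)
    log_on = np.log(p_bit_given_class + eps)
    log_off = np.log(1.0 - p_bit_given_class + eps)
    # Negative log-likelihood of x under each class, plus -log prior.
    cost = -(x * log_on + (1 - x) * log_off).sum(axis=1) - np.log(priors + eps)
    return int(np.argmin(cost))

# Hypothetical example: two 4-bit prototypes with a uniform 10% bit-flip rate.
prototypes = np.array([[1, 1, 0, 0],
                       [0, 0, 1, 1]])
noise = 0.1
p = prototypes * (1 - noise) + (1 - prototypes) * noise
priors = np.array([0.5, 0.5])

x = np.array([1, 0, 0, 0])
print(hamming_classify(x, prototypes))  # -> 0
print(bayes_classify(x, p, priors))     # -> 0
```

With uniform noise and equal priors, as here, the negative log-likelihood is a monotone function of Hamming distance and the two classifiers agree; when noise rates or priors are non-uniform the Bayesian costs diverge from the raw mismatch counts, which is the regime in which the abstract claims the Bayesian classifier can outperform the Hamming network.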
