Abstract
The Hamming neural network is an effective tool for solving recognition and classification problems for discrete objects whose components are encoded with the binary bipolar alphabet, where the proximity measure between the compared objects (vector images) is the difference between the number of identical bipolar components and the Hamming distance between them (the Hamming distance being the number of mismatched bits in the compared binary vectors). However, the Hamming neural network cannot be used to solve these problems when the components of the compared objects (vectors) are encoded with the binary alphabet, nor can it evaluate the affinity (proximity) of objects (binary vectors) with the Jaccard, Sokal and Michener, or Kulczynski functions, etc. In this regard, a generalized Hamming neural network architecture has been developed. It consists of two main blocks that can vary relatively independently of each other. The first block, consisting of a single layer of neurons, calculates the proximity measures between the input image and the reference images stored in the connection weights of this block's neurons. Unlike the Hamming neural network, this block can calculate various proximity measures, and the signals representing their magnitudes pass from the outputs of the first-block neurons to the inputs of the second-block elements. In the Hamming neural network, the Maxnet network serves as the second block and selects the single maximum signal among the outputs of the first-block neurons. If the inputs of the Maxnet network receive not one but several identical maximum signals, then the second block, and consequently the Hamming network, cannot recognize an input vector that lies at the same minimum Hamming distance from two or more reference images stored in the first block. The proposed generalized architecture allows the Maxnet network to be replaced by neural networks that can output not only one but several identical maximum signals. This eliminates the indicated drawback of the Hamming neural network and expands the application area of discrete neural networks for solving recognition and classification problems using proximity functions for discrete objects with binary coding of their components.
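To make the two-block idea concrete, the following is a minimal sketch (not the authors' implementation) of the computation the abstract describes: a first block scores a binary input vector against stored reference vectors using an interchangeable proximity function (a Hamming-based score, Jaccard, Sokal and Michener, or a Kulczynski-type ratio, the exact variant being an assumption here), and a second block that, unlike Maxnet, reports all tied maxima. All function names and the example data are illustrative.

```python
import numpy as np

def hamming_score(x, r):
    # Bipolar-style score: matches minus mismatches, i.e. n - 2 * Hamming distance.
    matches = np.sum(x == r)
    return matches - (len(x) - matches)

def jaccard(x, r):
    a = np.sum((x == 1) & (r == 1))   # 1-1 agreements
    b = np.sum((x == 1) & (r == 0))   # 1-0 disagreements
    c = np.sum((x == 0) & (r == 1))   # 0-1 disagreements
    return a / (a + b + c) if (a + b + c) else 0.0

def sokal_michener(x, r):
    # Simple matching coefficient: (1-1 and 0-0 agreements) / vector length.
    return np.sum(x == r) / len(x)

def kulczynski(x, r):
    # One common Kulczynski form, a / (b + c); assumed variant for illustration.
    a = np.sum((x == 1) & (r == 1))
    b = np.sum((x == 1) & (r == 0))
    c = np.sum((x == 0) & (r == 1))
    return a / (b + c) if (b + c) else float("inf")

def first_block(x, references, proximity):
    # One "neuron" per stored reference image; the references play the role of weights.
    return np.array([proximity(x, r) for r in references])

def second_block(scores):
    # Unlike Maxnet, return every index attaining the maximum score,
    # so ties between reference images are reported rather than left unresolved.
    return np.flatnonzero(scores == scores.max())

references = np.array([[1, 0, 1, 1, 0],
                       [0, 1, 1, 0, 1],
                       [1, 1, 0, 1, 0]])
x = np.array([1, 0, 1, 0, 1])
scores = first_block(x, references, jaccard)
print(scores, "-> winners:", second_block(scores))   # two references tie for the maximum
```

In this toy example the input is equally close (by the Jaccard measure) to the first two reference vectors, so the second block returns both indices; a single-winner Maxnet stage would fail to resolve this case, which is the drawback the generalized architecture addresses.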