Abstract

The theoretical classification performance of a Hopfield neural network is presented. An important link is established between empirically based investigations of neural network classification models and the correct application of these models to AI-based systems. General expressions are derived relating the performance of the Hopfield model to the number and dimensionality of the code vectors stored in memory. The average performance of the network is analyzed by randomizing over the stored code vectors and examining classification accuracy in terms of output bit errors. An exact probabilistic description of the network is derived for the first iteration, and an approximate second-moment analysis, generalizable to multiple iterations, examines performance near a fixed point. Degradations caused by noisy or incomplete input data are analyzed. The results show that the Hopfield net has major limitations when applied to fixed-pattern classification problems because of its sensitivity to the number of code vectors stored in memory and to the signal-to-noise ratio of the input data.
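
To make the abstract's "exact probabilistic description for the first iteration" concrete, the following is the standard signal-to-noise sketch for Hebbian storage of M random bipolar code vectors of dimension N. It is a textbook outline of the style of analysis, not necessarily the paper's exact derivation:

```latex
% Hebbian weights for M bipolar code vectors \xi^\mu \in \{-1,+1\}^N:
\begin{align*}
  w_{ij} &= \frac{1}{N}\sum_{\mu=1}^{M} \xi_i^{\mu}\,\xi_j^{\mu},
           \qquad w_{ii} = 0, \\
% Presenting stored pattern \xi^\nu, the local field at bit i splits
% into a signal term plus a crosstalk term:
  h_i &= \sum_{j \neq i} w_{ij}\,\xi_j^{\nu}
       = \underbrace{\tfrac{N-1}{N}\,\xi_i^{\nu}}_{\text{signal}}
       + \underbrace{\frac{1}{N}\sum_{\mu \neq \nu}\sum_{j \neq i}
         \xi_i^{\mu}\,\xi_j^{\mu}\,\xi_j^{\nu}}_{\text{crosstalk}}, \\
% For random code vectors the crosstalk is approximately zero-mean
% Gaussian with variance M/N, so the per-bit error probability after
% the first iteration is
  P_e &\approx Q\!\left(\sqrt{\tfrac{N}{M}}\right)
       = \tfrac{1}{2}\,\operatorname{erfc}\!\left(\sqrt{\tfrac{N}{2M}}\right).
\end{align*}
```

The per-bit error probability grows with M/N, which is the sensitivity to the number of stored code vectors that the abstract reports.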
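The capacity and noise sensitivities can also be observed empirically. Below is a minimal simulation sketch of a standard Hopfield network with Hebbian storage, synchronous threshold updates, and noisy probes; the dimensionality, pattern counts, and bit-flip rate are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_hopfield(patterns):
    """Hebbian outer-product storage with zero self-connections."""
    _, n = patterns.shape
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def recall(w, probe, iters=10):
    """Iterate synchronous threshold updates until a fixed point."""
    s = probe.copy()
    for _ in range(iters):
        s_new = np.where(w @ s >= 0, 1, -1)
        if np.array_equal(s_new, s):  # reached a fixed point
            break
        s = s_new
    return s

n, flip_frac = 256, 0.1            # dimensionality, input noise level
for m in (5, 20, 60):              # number of stored code vectors
    patterns = rng.choice([-1, 1], size=(m, n))
    w = train_hopfield(patterns)
    errs = []
    for p in patterns:
        # Corrupt the stored vector by flipping a fraction of its bits.
        noisy = p * np.where(rng.random(n) < flip_frac, -1, 1)
        errs.append(np.mean(recall(w, noisy) != p))
    print(f"M={m:3d}: mean output bit-error rate = {np.mean(errs):.3f}")
```

Running the sketch shows the qualitative behavior the abstract describes: output bit errors stay near zero for small M and rise sharply as M approaches the network's capacity, and raising flip_frac degrades recall further.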
