Abstract

Hopfield neural networks (HNNs) are among the most well-known and widely used neural networks in optimization. In this article, the author focuses on building a deeper understanding of the working principle of the HNN during an optimization process. The investigation yields several novel results that give important insights into the working principles of both continuous and discrete HNNs. This article shows that, as the energy function decreases, what the traditional HNN actually does is divide the neurons into two classes in such a way that the sum of biased class volumes is minimized (or maximized), regardless of the type of optimization problem. By introducing neuron-specific class labels, the author concludes that the traditional discrete HNN is actually a special case of the greedy asynchronous distributed interference avoidance algorithm (GADIA) [17] of Babadi and Tarokh for 2-class optimization problems. Computer simulation results confirm these findings.
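To make the energy-descent behavior referenced above concrete, the following is a minimal sketch of the classical asynchronous discrete Hopfield update, assuming the standard textbook model (symmetric weight matrix `W` with zero diagonal, bias vector `b`, states in {-1, +1}, and a sign-threshold update rule); it is not the paper's exact formulation, and the function and variable names are illustrative only. Under these assumptions the energy E(s) = -0.5 s^T W s - b^T s is non-increasing at every asynchronous update, which is the descent property the abstract reinterprets as a 2-class partitioning of the neurons.

```python
import numpy as np

def energy(s, W, b):
    """Classical discrete Hopfield energy: E(s) = -0.5 s^T W s - b^T s."""
    return -0.5 * s @ W @ s - b @ s

def run_hopfield(W, b, s0, max_sweeps=100, seed=None):
    """Asynchronously update neurons until no state changes (a local minimum)."""
    rng = np.random.default_rng(seed)
    s = s0.copy()
    n = len(s)
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(n):            # random asynchronous order
            new_si = 1 if W[i] @ s + b[i] >= 0 else -1
            if new_si != s[i]:                  # each flip cannot increase E
                s[i] = new_si
                changed = True
        if not changed:                         # fixed point reached
            break
    return s

# Toy example: 4 neurons, random symmetric weights with zero diagonal.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W = (W + W.T) / 2
np.fill_diagonal(W, 0.0)
b = np.zeros(4)
s0 = rng.choice([-1, 1], size=4)
print("E before:", energy(s0, W, b))
s = run_hopfield(W, b, s0, seed=1)
print("E after: ", energy(s, W, b), "final state:", s)
```

The final state vector `s` can be read as the 2-class partition the abstract describes: neurons with state +1 form one class and those with -1 the other, with the converged energy corresponding to a local optimum of the biased class-volume objective.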
