Abstract

The fully connected topology, in which every neuron is linked to all other neurons, remains the most commonly used structure in Hopfield-type neural networks. However, full connectivity produces a highly complex network, incurring high training costs and making the network biologically unrealistic. Biologists have observed a small-world topology with sparse connections in the actual brain cortex, and this bionic small-world structure has inspired a variety of applications. Previous studies, however, have found that the long-range wirings in small-world networks can destabilize the network. In this study, we investigate the influence of the small-world topology on neural network training and explain the roles that the path length and clustering coefficient of neurons play during training. Using Watts and Strogatz's small-world model as the topology of a Hopfield neural network, we conduct computer simulations and observe that randomly placed neuron connections can destabilize the network energy and produce oscillations during training. We propose a new method to mitigate this instability: starting from a neuron chosen as the pattern centroid, the method arranges wirings radially, with lengths drawn in compliance with a Gaussian distribution. The new method is tested on the MNIST handwritten digit dataset. The simulations confirm that the new small-world series achieves higher stability in learning accuracy and faster convergence than Watts and Strogatz's small-world model.
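
To make the two wiring schemes concrete, the minimal Python sketch below contrasts Watts and Strogatz's rewiring rule with one possible reading of the proposed Gaussian radial rule. The function names, the parameters k, p, sigma, and centroid, and the interpretation of "wirings arranged radially from the pattern centroid in compliance with a Gaussian distribution" are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def watts_strogatz(n, k, p, seed=None):
        # Ring lattice of n neurons, each tied to its k nearest neighbours,
        # with every lattice edge rewired to a uniform random target with
        # probability p (Watts and Strogatz's construction).
        rng = np.random.default_rng(seed)
        adj = np.zeros((n, n), dtype=bool)
        for i in range(n):
            for j in range(1, k // 2 + 1):
                target = (i + j) % n
                if rng.random() < p:  # long-range rewiring, uniform target
                    candidates = np.flatnonzero(~adj[i])
                    candidates = candidates[candidates != i]
                    target = rng.choice(candidates)
                adj[i, target] = adj[target, i] = True
        return adj

    def gaussian_radial(n, k, centroid, sigma, seed=None):
        # Assumed reading of the proposed rule: edge endpoints are drawn at
        # Gaussian-distributed radial offsets from the pattern-centroid
        # neuron, so short-range wirings dominate and long-range ones are
        # rare. Degrees are approximate, not exactly k.
        rng = np.random.default_rng(seed)
        adj = np.zeros((n, n), dtype=bool)
        for i in range(n):
            for _ in range(k // 2):
                offset = int(round(rng.normal(0.0, sigma)))
                target = (centroid + offset) % n
                if target != i:
                    adj[i, target] = adj[target, i] = True
        return adj

    # Hebbian training of a sparse Hopfield network on the chosen mask
    # (illustrative; random +/-1 patterns stand in for binarised MNIST).
    n = 784  # flattened 28x28 image
    mask = gaussian_radial(n, k=8, centroid=n // 2, sigma=20.0, seed=0)
    patterns = np.sign(np.random.default_rng(1).standard_normal((5, n)))
    W = (patterns.T @ patterns) / n  # outer-product (Hebbian) rule
    W *= mask                        # keep only small-world connections
    np.fill_diagonal(W, 0.0)

Under this sketch, a narrow Gaussian keeps most edges short-range, which is the intuition behind the claimed stability gain: the long-range wirings that Watts-Strogatz rewiring places uniformly at random become rare rather than abundant.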
