Abstract

A theorem is proposed that establishes the solutions of a given optimization problem as stable points in the state space of single-layer relaxation-type recurrent neural networks. The theorem provides the necessary conditions for the network to converge to a solution by prescribing bounds on the constraint weight parameters of the network. The convergence performance of the discrete Hopfield network with the proposed bounds on the constraint weight parameters is tested on a set of constraint satisfaction and optimization problems, including the traveling salesman problem, the assignment problem, the weighted matching problem, the N-queens problem, and the graph path search problem. Simulation and stability analysis results indicate that, as a result of the suggested bounds, the set of solutions becomes a subset of the set of stable points in the state space. For the traveling salesman, assignment, and weighted matching problems, the two sets are equal, so the network converges to a solution after each relaxation. For the N-queens and graph path search problems, convergence to a solution after each relaxation is not guaranteed, since the solution set is a proper subset of the stable point set. Furthermore, the simulation results indicate that the discrete Hopfield network converged mostly to average-quality solutions, as expected from a gradient-descent search algorithm. In conclusion, the suggested bounds on the weight parameters guarantee that the discrete Hopfield network will locate a solution after each relaxation for a class of optimization problems of any size, although the solutions will be of average quality rather than optimal.
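As a rough illustration of the relaxation dynamics the abstract refers to, the following is a minimal sketch of an asynchronous discrete Hopfield update loop in Python. It is not the paper's method: the weight matrix `W`, bias vector `b`, and the random update order are generic placeholders, and the construction of `W` and `b` from a problem's constraint weight parameters (where the proposed bounds would apply) is assumed, not reproduced.

```python
import numpy as np

def discrete_hopfield_relax(W, b, x0, max_sweeps=100, rng=None):
    """Relax a discrete Hopfield network with binary states in {0, 1}.

    W : symmetric weight matrix with zero diagonal (assumed already
        built from the problem's objective and constraint terms)
    b : bias (threshold) vector
    x0: initial binary state vector

    Returns the state reached when no neuron changes during a full
    sweep, i.e., a stable point of the network.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = x0.copy()
    n = x.size
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(n):     # random asynchronous update order
            h = W[i] @ x + b[i]          # net input to neuron i
            new_state = 1 if h > 0 else 0
            if new_state != x[i]:
                x[i] = new_state
                changed = True
        if not changed:                  # stable point: no neuron flipped
            break
    return x
```

Because each accepted flip cannot increase the network's energy function, this loop performs the gradient-descent-style search mentioned in the abstract, which is why it tends to settle in nearby (average-quality) stable points rather than global optima.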
