Abstract

Hopfield-type networks convert a combinatorial optimization problem into a constrained real-valued optimization problem and solve the latter using the penalty method. There is a dilemma with such networks: when tuned to produce good-quality solutions, they can fail to converge to valid solutions; and when tuned to converge, they tend to give low-quality solutions. This paper proposes a new method, called the augmented Lagrange-Hopfield (ALH) method, to improve Hopfield-type neural networks in both convergence and solution quality for combinatorial optimization. It uses the augmented Lagrangian method, which combines the Lagrange and penalty methods, to resolve this dilemma effectively. Experimental results on the travelling salesman problem (TSP) show the superiority of the ALH method over existing Hopfield-type neural networks in both convergence and solution quality. For ten-city TSPs, ALH finds the known optimal tour with a 100% success rate over 1000 runs with different random initializations. For larger problems, it also finds markedly better solutions than the compared methods while always converging to valid tours.
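The core idea the abstract refers to can be illustrated in isolation. The following is a minimal, hypothetical sketch of the augmented Lagrangian scheme (penalty term plus Lagrange-multiplier updates) on a made-up one-dimensional toy problem; it is not the paper's actual ALH network or its TSP mapping, and all names and parameter values here are illustrative assumptions.

```python
# Augmented Lagrangian sketch: minimize f(x) subject to g(x) = 0 by
# alternating gradient descent on
#   L(x, lam) = f(x) + lam * g(x) + (mu / 2) * g(x)**2
# with the multiplier update lam <- lam + mu * g(x).
# The penalty term enforces feasibility; the multiplier term lets the
# method converge without driving mu to infinity.

def augmented_lagrangian(f_grad, g, g_grad, x0, lam=0.0, mu=10.0,
                         lr=0.01, inner=200, outer=20):
    x = x0
    for _ in range(outer):
        for _ in range(inner):
            # gradient of the augmented Lagrangian with respect to x
            grad = f_grad(x) + (lam + mu * g(x)) * g_grad(x)
            x -= lr * grad
        # Lagrange-multiplier update (the "Lagrange" half of the method)
        lam += mu * g(x)
    return x, lam

# Toy problem: minimize (x - 2)^2 subject to x - 1 = 0; the optimum is x = 1
# with multiplier lam = 2 (stationarity: 2*(1 - 2) + 2 = 0).
x_star, lam_star = augmented_lagrangian(
    f_grad=lambda x: 2.0 * (x - 2.0),
    g=lambda x: x - 1.0,
    g_grad=lambda x: 1.0,
    x0=0.0,
)
```

The same alternation appears in penalty-plus-multiplier neural approaches: the inner loop plays the role of the network dynamics descending an energy function, while the outer multiplier update steers the dynamics toward constraint satisfaction without an ever-growing penalty weight.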
