Abstract
A technique that combines Wang's Recurrent Neural Network with the "Winner Takes All" principle is presented to solve two classical combinatorial optimization problems: the Assignment Problem (AP) and the Traveling Salesman Problem (TSP). With an appropriate choice of parameters, Wang's Recurrent Neural Network proves efficient at solving these problems in real time. However, when the Assignment Problem has optimal solutions that are very close to each other, or multiple optimal solutions, Wang's Neural Network does not converge. The proposed technique handles these cases by applying the "Winner Takes All" principle to Wang's Recurrent Neural Network, and it extends naturally to the Traveling Salesman Problem, since the formulation of that problem is the same as that of the Assignment Problem with the additional constraint of a Hamiltonian circuit. Some traditional ways of adjusting the parameters of Recurrent Neural Networks are compared, and parameter choices based on dispersion measures of the coefficients of the Assignment Problem's cost matrix are proposed. Wang's Neural Network with the "Winner Takes All" principle requires, on average, only 1% of the iterations needed by Wang's Neural Network without this principle. In this work, 100 matrices with dimensions ranging from 3×3 to 20×20 are tested to choose the best combination of parameters for Wang's Recurrent Neural Network. When Wang's Neural Network reaches a feasible solution of the Assignment Problem, the "Winner Takes All" principle is applied to the values of the network's decision variables, with the additional constraint that the new solution must form a feasible route for the Traveling Salesman Problem. The results of this technique are compared to those of other heuristics on instances from the TSPLIB (Traveling Salesman Problem Library). Applying the 2-opt local search to the final solutions of the proposed technique yields a considerable further improvement. The results for the TSPLIB problem "dantzig42" and an example showing some iterations of the proposed technique are presented.

This work is divided into 11 sections, including this introduction. In section 2, the Assignment Problem is defined. In section 3, Wang's recurrent neural network is presented and a
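To make the combined procedure concrete, a minimal Python sketch is given below. It is not the authors' implementation: it assumes one commonly cited statement of Wang's network dynamics for the Assignment Problem, du_ij/dt = -eta*(sum_k x_ik + sum_k x_kj - 2) - lambda*c_ij*exp(-t/tau), and pairs it with a "Winner Takes All" decoding that, as described above for the TSP case, rejects any arc that would close a subtour prematurely. The function names wang_network and wta_route, the sigmoid activation, and all parameter values are illustrative assumptions.

import numpy as np

def wang_network(c, eta=0.1, lam=1.0, tau=500.0, dt=0.01, steps=5000):
    # Euler integration of an assumed form of Wang's recurrent network
    # for the n x n assignment problem with cost matrix c:
    #   du_ij/dt = -eta*(sum_k x_ik + sum_k x_kj - 2) - lam*c_ij*exp(-t/tau)
    # High-cost arcs are driven toward activation 0 while the penalty
    # terms push every row sum and column sum of x toward 1.
    n = c.shape[0]
    u = np.zeros((n, n))
    t = 0.0
    for _ in range(steps):
        x = 1.0 / (1.0 + np.exp(-u))           # sigmoid activations in (0, 1)
        row = x.sum(axis=1, keepdims=True)      # row-sum constraint violations
        col = x.sum(axis=0, keepdims=True)      # column-sum constraint violations
        u += dt * (-eta * (row + col - 2.0) - lam * c * np.exp(-t / tau))
        t += dt
    return 1.0 / (1.0 + np.exp(-u))

def wta_route(x):
    # "Winner Takes All" decoding with the extra TSP restriction: the
    # largest remaining activation wins its row and column, but an arc
    # is rejected if it would close a subtour before all n cities are
    # linked into a single Hamiltonian circuit.
    n = x.shape[0]
    a = x.astype(float).copy()
    succ = [-1] * n                             # chosen successor of each city
    pred = [-1] * n                             # chosen predecessor of each city
    for count in range(n):
        while True:
            i, j = np.unravel_index(np.argmax(a), a.shape)
            a[i, j] = -np.inf                   # never reconsider this arc
            if i == j or succ[i] != -1 or pred[j] != -1:
                continue                        # row or column already taken
            k = j                               # walk the partial path leaving j
            while succ[k] != -1:
                k = succ[k]
            if k == i and count < n - 1:
                continue                        # would close a premature subtour
            succ[i], pred[j] = j, i
            break
    return succ

rng = np.random.default_rng(0)
c = rng.random((8, 8))
np.fill_diagonal(c, 10.0)                       # discourage self-assignments
tour = wta_route(wang_network(c))
print(tour)                                     # tour[i] = city visited after i

In the work itself, eta, lambda and tau are tied to dispersion measures of the cost matrix coefficients; fixed constants are used here only so that the sketch runs end to end.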