Abstract

This paper presents the Travelling Salesman Problem (TSP) approached with Artificial Neural Networks (ANNs), and gives a comparative study of several ANN methods for solving it. The Travelling Salesman Problem is a classical combinatorial optimization problem that is simple to state but very difficult to solve: find the shortest possible tour through a set of N vertices so that each vertex is visited exactly once. The TSP can be solved with a Hopfield network, a self-organizing map, or a simultaneous recurrent network. A Hopfield network is fully connected: every vertex is linked to every other in both directions, so a walk starting from any vertex can visit all the other vertices exactly once and return to the starting vertex.
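To make the problem statement concrete, the following sketch exhaustively enumerates tours for a tiny instance. The distance matrix and city count are hypothetical and chosen only for illustration; real TSP instances are far too large for this O(N!) search, which is exactly why heuristic approaches such as the neural networks above are of interest.

```python
import itertools

def tour_length(dist, tour):
    """Total length of a closed tour (the last city connects back to the first)."""
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def brute_force_tsp(dist):
    """Try every tour starting at city 0; feasible only for very small N."""
    n = len(dist)
    best = min(
        itertools.permutations(range(1, n)),
        key=lambda p: tour_length(dist, (0,) + p),
    )
    return (0,) + best, tour_length(dist, (0,) + best)

# Hypothetical symmetric 4-city distance matrix
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
tour, length = brute_force_tsp(dist)
```

Fixing city 0 as the start removes tours that are mere rotations of one another, shrinking the search from N! to (N−1)! permutations.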

Highlights

  • A traveling salesperson must visit a number of cities, each exactly once

  • Since no external inputs exist, the network is initialized with small random values for the weights and the outputs z and allowed to relax. After the outputs of the network converge to a fixed point, they can be compared against a problem-specific error function and the weights modified using a suitable learning algorithm. In the architecture of the Simultaneous Recurrent Network (SRN) for the traveling salesman problem (TSP), the feedforward network F consists of one hidden layer and one output layer

  • For each neuron z_ij, the index m searches over each neuron in the (j+1)-st column, indicated by the z_m(j+1) term. If both neurons are active, the distance d_im from city i to city m is included in this error term, whose minimum value is achieved when the total distance of the path is minimal
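The distance error term described in the last highlight can be sketched as follows. This is an illustrative reading of the formulation, not the paper's own code: z is assumed to be an N×N output matrix with z[i][j] ≈ 1 when city i occupies tour position j, and the column index is assumed to wrap so that the tour is closed.

```python
def distance_error(z, dist):
    """E_dis: accumulate d_im whenever z_ij and z_m(j+1) are both active.

    z[i][j] is the activation of the neuron for city i at tour position j;
    the (j+1)-st column wraps around to close the tour (an assumption here).
    """
    n = len(z)
    return sum(
        dist[i][m] * z[i][j] * z[m][(j + 1) % n]
        for i in range(n)
        for j in range(n)
        for m in range(n)
    )

# Hypothetical 4-city distance matrix and a valid tour 0 -> 1 -> 3 -> 2 -> 0
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
z = [  # z[i][j] = 1 iff city i occupies tour position j
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 1, 0],
]
```

For a binary permutation matrix like this one, E_dis reduces to exactly the length of the encoded tour, which is why minimizing it drives the network toward the shortest path.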


Summary

Introduction

A traveling salesperson must visit a number of cities, each exactly once. Moving from one city to another has a cost, e.g. the intercity distance, and the total cost/distance traveled must be minimized. Since no external inputs exist, the network is initialized with small random values for the weights and the outputs z and allowed to relax. After the outputs of the network converge to a fixed point, they can be compared against a problem-specific error function and the weights modified using a suitable learning algorithm. The column constraint term is

E_col = g_col · Σ_{j=1}^{N} [1 − Σ_{m=1}^{N} z_mj(∞)]²,

where i and j are the indices for rows and columns of the network, respectively, m is the index for rows, z_mj(∞) is the stable value of the mj-th neuron output upon convergence to a fixed point, and g_col is a positive real weight parameter. Similarly, the row constraint term is

E_row = g_row · Σ_{i=1}^{N} [1 − Σ_{n=1}^{N} z_in(∞)]²,

where n is the index for columns and g_row is a positive real weight parameter. This error term has a value of zero when each row of the output matrix has exactly one active neuron. By calculating the fourth and last portion, E_dis, we can find the minimum distance of the traveling path.
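The two constraint terms above can be sketched directly. This is a minimal illustration under the stated reading of the formulas: z is the N×N matrix of converged outputs z(∞), and the penalty is zero precisely when every column and every row sums to one (one active neuron each), i.e. when z encodes a valid tour.

```python
def constraint_error(z, g_col=1.0, g_row=1.0):
    """E_col + E_row: zero iff each column and each row of the converged
    output matrix z sums to exactly one active neuron."""
    n = len(z)
    e_col = g_col * sum(
        (1 - sum(z[m][j] for m in range(n))) ** 2 for j in range(n)
    )
    e_row = g_row * sum(
        (1 - sum(z[i][k] for k in range(n))) ** 2 for i in range(n)
    )
    return e_col + e_row

# A permutation matrix satisfies both constraints, so the penalty is zero
z_valid = [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```

In training, these quadratic penalties push the relaxed (real-valued) outputs toward a permutation matrix, while E_dis simultaneously pushes that permutation toward the shortest tour.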

Results & Discussion
Conclusion