Unlike its cousin, the Euclidean Traveling Salesman Problem (TSP), the Euclidean Hamiltonian Path Problem (HPP) has, to the best of our knowledge, no documented all-neural solution. The reason for this is that the heuristics which map the cities onto the neurons "lose their credibility": the underlying cyclic property of the order of the neurons used in the TSP is lost in the HPP. In this paper we present three neural solutions to the HPP. The first of these, GSOM_HPP, is a generalization of Kohonen's self-organizing map (SOM) as modified by Angéniol et al. (Neural Networks 1988;1:289–93). The second and third methods use the recently introduced self-organizing neural network, the Kohonen Network Incorporating Explicit Statistics (KNIES) (Oommen et al., Proceedings of WIRN/VIETRI-98, the Tenth Italian Workshop on Neural Nets, Vietri Sul Mare, Italy, May 1998. p. 273–282). The primary difference between KNIES and Kohonen's SOM is that, unlike in the SOM, every iteration in the training phase includes two distinct modules: the attracting module and the dispersing module. As a result of the SOM update and the dispersing module introduced in KNIES, the neurons individually find their places both statistically and topologically, and collectively maintain their mean to be the mean of the data points which they represent. This new philosophy, which has previously (Oommen et al., Proceedings of WIRN/VIETRI-98, the Tenth Italian Workshop on Neural Nets, Vietri Sul Mare, Italy, May 1998. p. 273–282) been used to effectively solve the Euclidean TSP, is now extended to solve the Euclidean HPP. These algorithms, which are the first all-neural solutions to the HPP, have been rigorously tested. Experimental results for problems obtained by modifying selected instances from the traveling salesman problem library (TSPLIB) (Reinelt, ORSA Journal on Computing 1991;3:376–84) for the HPP indicate that they are both accurate and efficient. Apart from the computational results presented, the paper also contains a systematic strategy by which the quality of any HPP algorithm can be quantified.

Scope and purpose

Over the past two decades an enormous amount of work has been done in designing neural networks (NNs) which utilize a variety of learning principles. Many works in the literature are noteworthy in this context; we list only a few, such as [8,62], which describe the various families of NNs and how their learning compares to underlying biological learning models. In this paper we concentrate our attention on Kohonen's Self-Organizing Map (SOM) [21]. The SOM has been used in solving certain optimization problems such as the Euclidean Traveling Salesman Problem [3,4], which has been one of the oldest "hard nuts" of Operations Research and Mathematical Programming. Any algorithm devised to solve the TSP tries to answer the following question: Given a set of N cities and distances for each pair of cities, what is the shortest tour that visits each city exactly once? The beauty of the SOM is that the individual neurons adaptively tend to learn the properties of the underlying distribution of the space in which they operate. Additionally, they also tend to learn their places topologically. This feature is particularly important for problems which involve two- and three-dimensional physical spaces, and is, indeed, the principal motivation for the SOM being used in solving the TSP [7,8]. More recently, we have added to this collection of methods a scheme which uses the recently introduced self-organizing neural network, the Kohonen Network Incorporating Explicit Statistics (KNIES) [9].
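The attract-then-disperse iteration described above can be sketched as follows. This is only an illustrative sketch, not the authors' exact update rules: the Gaussian neighborhood over a chain (open-path) topology, the learning rate, the activation cutoff, and the name `knies_step` are all our own assumptions. The key invariant of KNIES-style training that the sketch does reproduce is that, after every iteration, the mean of all neurons equals the mean of the data points.

```python
import numpy as np

def knies_step(neurons, city, data_mean, sigma=1.0, lr=0.1):
    """One KNIES-style training iteration (illustrative sketch only).

    Phase 1 (attracting module): SOM-style update; the winning neuron
    and its chain neighbours move toward the presented city.
    Phase 2 (dispersing module): the non-activated neurons are shifted
    so that the mean of ALL neurons is restored to the data mean.
    """
    n, _ = neurons.shape
    # Attracting module: find the winner and pull its neighbourhood in.
    winner = np.argmin(np.linalg.norm(neurons - city, axis=1))
    idx = np.arange(n)
    chain_dist = np.abs(idx - winner)          # open path, not a ring
    h = np.exp(-(chain_dist ** 2) / (2 * sigma ** 2))
    activated = h > 0.01
    neurons[activated] += lr * h[activated, None] * (city - neurons[activated])

    # Dispersing module: spread the correction over the remaining
    # neurons so the collective mean equals the data mean again.
    rest = ~activated
    if rest.any():
        correction = (data_mean - neurons.mean(axis=0)) * n / rest.sum()
        neurons[rest] += correction
    return neurons
```

After the dispersing phase, `neurons.mean(axis=0)` equals `data_mean` exactly, which is the statistical property that distinguishes KNIES from a plain SOM update.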
The primary difference between KNIES and the SOM is that, in KNIES, the neurons not only individually find their places statistically and topologically, but also collectively maintain their mean to be the mean of the data points which they represent. The Euclidean Hamiltonian Path Problem (HPP) is closely related to the TSP: Given a set {X_i : 1 ⩽ i ⩽ N} of cities with starting and terminal cities X_s and X_t, respectively, and distances for each pair of cities, what is the shortest path that starts at X_s, terminates at X_t, and visits each city exactly once? There are only a few independent solutions to the HPP; indeed, most solutions utilize an underlying solution to the TSP. In this paper we try to solve the HPP without resorting to an underlying TSP solution method, and adapt our new NN method, KNIES, to it. Our new neural algorithms are, to our knowledge, the first all-neural solutions to the HPP, and experimental results indicate that they are accurate and efficient.
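To make the problem statement concrete, the HPP objective can be evaluated exactly by brute force for small instances. The sketch below is purely illustrative (the function name and interface are our own); it fixes the endpoints X_s and X_t and enumerates orderings of the interior cities, which is feasible only for tiny N, unlike the neural heuristics the paper proposes.

```python
from itertools import permutations
import math

def hpp_brute_force(cities, s, t):
    """Exact shortest Hamiltonian path from city s to city t
    (illustration of the HPP objective; O((N-2)!) time)."""
    def dist(a, b):
        return math.dist(cities[a], cities[b])

    interior = [i for i in range(len(cities)) if i not in (s, t)]
    best_len, best_path = float("inf"), None
    for perm in permutations(interior):
        path = (s, *perm, t)                      # endpoints are fixed
        length = sum(dist(a, b) for a, b in zip(path, path[1:]))
        if length < best_len:
            best_len, best_path = length, path
    return best_path, best_len
```

Note that, unlike a TSP tour, the path does not close back on itself, which is exactly why the cyclic neuron order exploited by SOM-based TSP heuristics is unavailable here.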