Abstract

There has been a surge of interest in recent years in developing graph neural network (GNN) based learning methods for the NP-hard traveling salesman problem (TSP). However, existing methods not only have a limited search space but also require a large number of training instances with ground-truth solutions that are time-consuming to compute. In this paper, we propose a deep bidirectional competitive learning method to address these issues. The search space is expanded by training multiple weak but complementary models simultaneously, while sample efficiency is significantly improved by devising a gap-aware reweighting scheme over the TSP instances. Specifically, TSP is modeled as a step-by-step tour construction by a GNN that guides the heuristic search. The weights of instances with a larger gap between the search algorithm's solution and the optimal one are relatively increased. The reweighted training set is then pipelined to train the next TSP model, which is stronger on the instances where the previous model erred. With this error feedback from the search component, multiple complementary GNNs are obtained through bidirectional alternation. Finally, we present a simple competing strategy that takes the minimum tour length among the predictions of the multiple TSP models. Experimental results indicate that our method achieves good generalization.
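For readers who want the high-level procedure at a glance, the sketch below illustrates one plausible reading of the gap-aware reweighting, the bidirectional alternation, and the minimum-length competing strategy described above. The exponential update rule, the step size `eta`, and the helper callables `train_gnn` and `search_length` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gap_aware_reweight(weights, pred_lengths, opt_lengths, eta=1.0):
    """Relatively increase the weight of instances with a larger optimality gap.

    The exponential update and the step size `eta` are illustrative choices,
    not necessarily the rule used in the paper.
    """
    gaps = (np.asarray(pred_lengths) - np.asarray(opt_lengths)) / np.asarray(opt_lengths)
    new_w = np.asarray(weights) * np.exp(eta * gaps)
    return new_w / new_w.sum()  # keep the weights normalised

def train_complementary_models(train_set, opt_lengths, train_gnn, search_length, num_models=3):
    """Bidirectional alternation: train a GNN, measure the per-instance gap of the
    GNN-guided search, reweight, and train the next model on the reweighted set.

    `train_gnn(instances, weights)` and `search_length(model, instance)` are
    hypothetical callables standing in for the paper's GNN trainer and
    GNN-assisted heuristic search.
    """
    weights = np.full(len(train_set), 1.0 / len(train_set))
    models = []
    for _ in range(num_models):
        model = train_gnn(train_set, weights)                      # train on reweighted instances
        pred_lengths = [search_length(model, x) for x in train_set]
        weights = gap_aware_reweight(weights, pred_lengths, opt_lengths)  # error feedback from search
        models.append(model)
    return models

def competing_tour_length(models, instance, search_length):
    """Competing strategy: report the shortest tour length over all trained models."""
    return min(search_length(model, instance) for model in models)
```

The reweighting mirrors boosting-style schemes: instances that the current search handles poorly receive more weight, so the next model is trained to be complementary rather than redundant, and the final minimum over all models can only improve on any single one.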
