Abstract

In this work, we study generalization in reinforcement learning for the traveling salesman problem (TSP). While deep reinforcement learning-based solvers have been designed to achieve near-optimal results on small tasks, applying such solvers to larger-scale tasks while retaining performance remains an open problem. In this research, we learn the shared dynamics of TSP environments via multi-task learning, which can be generalized to new tasks. To accurately estimate these dynamics, we leverage node visitation information. Besides designing RL-based models that attentively aggregate the visitation information during decision making, we propose a scheduled data utilization strategy to stabilize learning across various problem sizes. Experimental results show that our model achieves improved generalizability to unseen, larger TSPs in both zero-shot and few-shot settings.
