Abstract

In this article, we study reinforcement learning (RL) for vehicle routing problems (VRPs). Recent works have shown that attention-based RL models outperform recurrent neural network-based methods on these problems in terms of both effectiveness and efficiency. However, existing RL models simply aggregate node embeddings to generate the context embedding, without accounting for dynamic network structures; this makes them incapable of modeling the dynamics of state transitions and action selection. In this work, we develop a new attention-based RL model that provides enhanced node embeddings via batch normalization reordering and gate aggregation, as well as a dynamic-aware context embedding produced by an attentive aggregation module over multiple relational structures. We conduct experiments on five types of VRPs: 1) the travelling salesman problem (TSP); 2) the capacitated VRP (CVRP); 3) the split delivery VRP (SDVRP); 4) the orienteering problem (OP); and 5) the prize collecting TSP (PCTSP). The results show that our model not only outperforms the learning-based baselines but also solves the problems much faster than the traditional baselines. In addition, our model shows improved generalization when evaluated on large-scale problems and on problems with data distributions different from those seen in training.
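The abstract mentions two aggregation ideas: gate aggregation of node embeddings (instead of plain mean-pooling) and attentive aggregation over multiple relational structures to form the context embedding. The following is a minimal NumPy sketch of these two operations; the tensor shapes, the gate parameterization `W_g`, and the per-structure summaries are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

n, d = 5, 8                          # number of nodes, embedding dim (illustrative)
H = rng.normal(size=(n, d))          # node embeddings, one row per node

# Gate aggregation: a sigmoid gate per node and dimension weights the
# contribution of each embedding before pooling (hypothetical weights W_g).
W_g = rng.normal(size=(d, d))
gates = 1.0 / (1.0 + np.exp(-(H @ W_g)))   # shape (n, d), values in (0, 1)
graph_emb = (gates * H).mean(axis=0)       # gated graph embedding, shape (d,)

# Attentive aggregation over K relational structures (e.g. different
# neighborhood graphs): score each structure's summary against a query,
# then combine with softmax weights into a dynamic-aware context embedding.
K = 3
S = rng.normal(size=(K, d))          # per-structure summary embeddings (assumed)
scores = S @ graph_emb / np.sqrt(d)  # scaled dot-product scores, shape (K,)
weights = np.exp(scores - scores.max())
weights /= weights.sum()             # softmax attention weights
context = weights @ S                # context embedding, shape (d,)

print(graph_emb.shape, context.shape)
```

The softmax weights let the model re-weight the relational structures at each decoding step, which is one way the context embedding can track the changing state.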
