Abstract

For NP-hard combinatorial optimization problems, it is usually difficult to find high-quality solutions in polynomial time, and designing either an exact or an approximation algorithm for such problems often requires highly specialized knowledge. Recently, deep learning methods have offered a new direction for solving these problems. In this paper, an end-to-end deep reinforcement learning framework is proposed for combinatorial optimization problems of this type. The framework can be applied to different problems with only minor changes to the input, the masks, and the decoder context vectors. It aims to improve on models in the literature in terms of both the neural network architecture and the training algorithm. With the proposed framework, the solution quality on the traveling salesman problem (TSP) and the capacitated vehicle routing problem (CVRP) with up to 100 nodes is significantly improved. Compared with the best results of state-of-the-art methods under the greedy decoding strategy, the average optimality gap is reduced from 4.53% to 3.67% for TSP with 100 nodes and from 7.34% to 6.68% for CVRP with 100 nodes. In addition, the framework can solve a multi-depot CVRP case without any structural modification, and it uses roughly 1/3 to 3/4 as many training samples as other existing learning methods while achieving better results. Results on randomly generated instances and on benchmark instances from TSPLIB and CVRPLIB confirm that the framework runs in time linear in the problem size (number of nodes) during both training and testing, and that it generalizes well from training on random instances to testing on real-world instances.
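The abstract's reference to masks and greedy decoding can be illustrated with a short sketch. The code below is not the paper's implementation: the PyTorch setting and the names greedy_decode, score_fn, and nn_scores are assumptions introduced here for illustration, and the nearest-neighbor scorer merely stands in for the learned attention decoder that the framework would train.

    import torch

    def greedy_decode(node_embeddings, score_fn):
        # Greedy decoding with a visited-node mask: at each step, score
        # all nodes against the current position, forbid revisits, and
        # move to the best-scoring node. `score_fn` is a hypothetical
        # stand-in for a learned decoder.
        batch, n, _ = node_embeddings.shape
        visited = torch.zeros(batch, n, dtype=torch.bool)
        current = torch.zeros(batch, dtype=torch.long)  # assume tours start at node 0
        visited[torch.arange(batch), current] = True
        tour = [current]
        for _ in range(n - 1):
            scores = score_fn(node_embeddings, current)           # (batch, n)
            scores = scores.masked_fill(visited, float("-inf"))   # mask visited nodes
            current = scores.argmax(dim=-1)                       # greedy choice
            visited[torch.arange(batch), current] = True
            tour.append(current)
        return torch.stack(tour, dim=1)                           # (batch, n)

    # Toy scorer: negative distance to the current node, so greedy decoding
    # reduces to nearest-neighbor. A trained model would instead attend over
    # encoder embeddings using a decoder context vector.
    def nn_scores(coords, current):
        cur = coords[torch.arange(coords.size(0)), current]       # (batch, 2)
        return -torch.cdist(cur.unsqueeze(1), coords).squeeze(1)  # (batch, n)

    coords = torch.rand(4, 10, 2)  # 4 random 10-node TSP instances in the unit square
    tours = greedy_decode(coords, nn_scores)
    print(tours.shape)             # torch.Size([4, 10])

The point the sketch makes concrete is the masking step: adapting such a framework to a different routing problem largely amounts to changing what the mask forbids (e.g., capacity constraints for CVRP) and what context the scorer sees, which is consistent with the abstract's claim that only the input, masks, and decoder context vectors need to change.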
