Abstract
Combinatorial optimization problems over graphs, such as the vehicle routing problem and the traveling salesman problem, are NP-hard and have been studied for decades. Many methods have been proposed to solve them, including, but not limited to, exact algorithms, approximation algorithms, heuristics, and general-purpose solvers. However, these methods neither learn the problem's internal structure nor generalize to similar or larger-scale instances. Recently, deep reinforcement learning has been applied to combinatorial optimization with convincing results; nevertheless, effective model integration and training remain challenging. In this study, we propose a novel framework (BDRL) that combines BERT (Bidirectional Encoder Representations from Transformers) and deep reinforcement learning to tackle combinatorial optimization over graphs by treating general optimization problems as data points drawn from an identified data distribution. We first improve the BERT transformer encoder so that it embeds the combinatorial optimization graph effectively. By employing contrastive objectives, we extend BERT-like training to reinforcement learning and obtain self-attention-consistent representations. We then use hierarchical reinforcement learning to pre-train the model, that is, to train and fine-tune it through an iterative process so that it better fits a specific combinatorial optimization problem. The results demonstrate the proposed framework's generalization ability, efficiency, and effectiveness across multiple tasks.
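To make the described pipeline concrete, the following Python sketch illustrates the general idea only; it is not the authors' released code. It assumes a TSP-style setting in which a transformer encoder embeds 2-D city coordinates, an InfoNCE-style contrastive loss over two augmented views stands in for the contrastive pre-training objective, and a toy stochastic decoder trained with REINFORCE stands in for the reinforcement-learning fine-tuning stage. All class and function names (GraphEncoder, contrastive_loss, rollout, tour_length) are hypothetical.

# Minimal sketch (assumption, not the paper's implementation): transformer
# encoding of a TSP instance, a contrastive pre-training loss, and a
# REINFORCE-style fine-tuning step with a batch-mean baseline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEncoder(nn.Module):
    def __init__(self, d_model=128, n_heads=8, n_layers=3):
        super().__init__()
        self.embed = nn.Linear(2, d_model)          # 2-D city coordinates
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, coords):                      # coords: (B, N, 2)
        return self.encoder(self.embed(coords))     # (B, N, d_model)

def contrastive_loss(h1, h2, tau=0.1):
    # InfoNCE between graph-level embeddings of two views of the same instance.
    z1 = F.normalize(h1.mean(dim=1), dim=-1)
    z2 = F.normalize(h2.mean(dim=1), dim=-1)
    logits = z1 @ z2.t() / tau                      # (B, B) similarity matrix
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

def rollout(enc_out):
    # Toy stochastic decoder: sample a visiting order from node scores.
    B, N, _ = enc_out.shape
    scores = enc_out @ enc_out.mean(dim=1, keepdim=True).transpose(1, 2)
    probs = F.softmax(scores.squeeze(-1), dim=-1)   # (B, N)
    order = torch.multinomial(probs, N)             # permutation per instance
    logp = torch.log(probs.gather(1, order) + 1e-9).sum(dim=1)
    return order, logp

def tour_length(coords, order):
    path = coords.gather(1, order.unsqueeze(-1).expand(-1, -1, 2))
    return (path - torch.roll(path, -1, dims=1)).norm(dim=-1).sum(dim=1)

encoder = GraphEncoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)
coords = torch.rand(16, 20, 2)                      # batch of random 20-city instances

# Contrastive pre-training step over two lightly perturbed views.
view1 = coords + 0.01 * torch.randn_like(coords)
view2 = coords + 0.01 * torch.randn_like(coords)
loss_c = contrastive_loss(encoder(view1), encoder(view2))

# REINFORCE fine-tuning step: shorter-than-average tours are reinforced.
order, logp = rollout(encoder(coords))
length = tour_length(coords, order)
loss_rl = ((length - length.mean()).detach() * logp).mean()

(loss_c + loss_rl).backward()
opt.step()

In practice the two losses would be applied in separate pre-training and fine-tuning phases rather than summed in one step; they are combined here only to keep the sketch short.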