Abstract

Maximum weighted matching (MWM), which finds a subset of vertex-disjoint edges with maximum total weight, is a fundamental problem with a wide spectrum of applications in fields such as biotechnology, social analysis, and web services. However, traditional methods cannot achieve a good balance between cost and quality. Inspired by reinforcement learning techniques, we propose an MWM framework, L2M, with a Deep Reinforcement Learning (DRL) model, which can efficiently and effectively solve the MWM problem on large-scale general graphs. First, in contrast to traditional DRL methods that define the Markov Decision Process (MDP) without considering efficiency, we represent MWM as an MDP that is carefully designed to accelerate computation. In particular, this MDP supports selecting multiple edges per action and uses a pruning method to reduce the search space efficiently. Second, since none of the existing DRL methods can support edge selection on large-scale graphs efficiently, we propose an edge-message-passing network, EEN, to generate edge embeddings. To the best of our knowledge, this is the first attempt to use a DRL model for solving the MWM problem. Experimental results show that L2M outperforms state-of-the-art algorithms and generalizes well on graphs of different sizes and distributions.
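To make the problem statement concrete, the following is a minimal brute-force sketch of MWM (not the L2M method from the paper): it enumerates edge subsets of a tiny graph and keeps the heaviest vertex-disjoint one. The graph and helper name are illustrative assumptions; the exponential enumeration is only viable for very small inputs.

```python
from itertools import combinations

def max_weight_matching(edges):
    """Brute-force MWM: check every subset of edges and keep the
    heaviest one in which no two edges share a vertex.
    edges: list of (u, v, weight) tuples. Exponential time; tiny graphs only."""
    best_weight, best_set = 0, []
    for r in range(1, len(edges) + 1):
        for subset in combinations(edges, r):
            endpoints = [x for u, v, _ in subset for x in (u, v)]
            if len(endpoints) == len(set(endpoints)):  # vertex-disjoint
                w = sum(wt for _, _, wt in subset)
                if w > best_weight:
                    best_weight, best_set = w, list(subset)
    return best_weight, best_set

# Greedy edge picking is not enough: the heaviest edge (a-b, 5) blocks
# the pair (a-c, 4) and (b-d, 3), whose combined weight 7 is larger.
edges = [("a", "b", 5), ("a", "c", 4), ("b", "d", 3)]
print(max_weight_matching(edges))  # → (7, [('a', 'c', 4), ('b', 'd', 3)])
```

The example also shows why MWM is harder than it looks: locally optimal edge choices can exclude the globally optimal matching, which is the kind of combinatorial structure a learned policy must capture.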
