Abstract

Many combinatorial optimization problems can be phrased in the language of constraint satisfaction problems. We introduce a graph neural network architecture for solving such optimization problems. The architecture is generic; it works for all binary constraint satisfaction problems. Training is unsupervised, and it is sufficient to train on relatively small instances; the resulting networks perform well on much larger instances (at least 10 times larger). We experimentally evaluate our approach on a variety of problems, including Maximum Cut and Maximum Independent Set. Despite its generality, our approach matches or surpasses most greedy algorithms and algorithms based on semidefinite programming, and it sometimes even outperforms state-of-the-art heuristics for the specific problems.
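To illustrate what a generic architecture for binary CSPs has to consume, the following is a minimal sketch of one possible instance representation; the class and function names (BinaryCSP, num_satisfied) are hypothetical and not taken from the paper.

```python
# A minimal sketch (my own illustration, not code from the paper) of the kind of
# input a generic binary-CSP solver consumes: variables over one shared finite
# domain, plus a list of binary constraints, each given by its variable pair and
# a 0/1 matrix of allowed value pairs.
from dataclasses import dataclass
import numpy as np

@dataclass
class BinaryCSP:
    num_vars: int          # variables x_0, ..., x_{n-1}
    domain_size: int       # shared domain {0, ..., d-1}
    edges: np.ndarray      # shape (m, 2): the variable pair of each constraint
    relations: np.ndarray  # shape (m, d, d): relations[k, a, b] == 1 iff
                           # assigning (a, b) to the pair of constraint k satisfies it

def num_satisfied(csp: BinaryCSP, assignment: np.ndarray) -> int:
    """Count how many constraints a full assignment satisfies (the MAX-CSP objective)."""
    a = assignment[csp.edges[:, 0]]
    b = assignment[csp.edges[:, 1]]
    return int(csp.relations[np.arange(len(csp.edges)), a, b].sum())

# Tiny usage example: two variables over domain {0, 1} with one "not equal" constraint.
csp = BinaryCSP(num_vars=2, domain_size=2,
                edges=np.array([[0, 1]]),
                relations=np.array([[[0, 1], [1, 0]]]))
print(num_satisfied(csp, np.array([0, 1])))  # -> 1
```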

Highlights

  • Constraint satisfaction is a general framework for casting combinatorial search and optimization problems; many well-known NP-complete problems, such as k-colorability, Boolean satisfiability, and maximum cut, can be modeled as constraint satisfaction problems (CSPs).

  • Our networks learn to satisfy as many constraints as possible, which naturally puts the focus on MAX-CSP, the optimization version of the constraint satisfaction problem.

  • We experimentally evaluate our approach on the following NP-hard problems: the maximum 2-satisfiability problem (MAX-2-SAT), which asks for an assignment maximizing the number of satisfied clauses for a given Boolean formula in 2-conjunctive normal form; the maximum cut problem (MAX-CUT), which asks for a partition of the vertices of a graph into two parts such that the number of edges between the parts is maximal; and the 3-colorability problem (3-COL), which asks for a 3-coloring of the vertices of a given graph such that the endpoints of each edge receive distinct colors. Relation-matrix encodings of these three problems are sketched directly below.
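To make these encodings concrete, here is a small sketch (my own illustration; the matrix and function names are not from the paper) in which each binary constraint is a 0/1 matrix of allowed value pairs over the shared domain.

```python
# Each highlighted problem written as a binary CSP: a constraint on a variable
# pair is a 0/1 matrix indicating which value pairs satisfy it.
import numpy as np

# MAX-CUT: domain {0, 1}; an edge is satisfied iff its endpoints get different values.
CUT = np.array([[0, 1],
                [1, 0]])

# MAX-2-SAT: domain {False, True}; a clause (x OR y) forbids only (False, False).
# Negated literals are handled by permuting the corresponding rows/columns.
CLAUSE_OR = np.array([[0, 1],
                      [1, 1]])

# 3-COL: domain {0, 1, 2}; an edge is satisfied iff the endpoints get distinct colors.
COLOR = np.ones((3, 3), dtype=int) - np.eye(3, dtype=int)

def satisfied(relation, a, b):
    """Check whether assigning values (a, b) to a constrained pair satisfies it."""
    return bool(relation[a, b])

assert satisfied(CUT, 0, 1) and not satisfied(CUT, 1, 1)
assert satisfied(CLAUSE_OR, 1, 0) and not satisfied(CLAUSE_OR, 0, 0)
assert satisfied(COLOR, 2, 0) and not satisfied(COLOR, 2, 2)
```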


Summary

INTRODUCTION

Constraint satisfaction is a general framework for casting combinatorial search and optimization problems; many well-known NP-complete problems, such as k-colorability, Boolean satisfiability, and maximum cut, can be modeled as constraint satisfaction problems (CSPs). Our networks learn to satisfy as many constraints as possible, which naturally puts the focus on MAX-CSP, the optimization version of the constraint satisfaction problem. This focus on the optimization problem allows us to train unsupervised, which is a major point of distinction between our work and recent neural approaches to Boolean satisfiability (Selsam et al., 2019) and the coloring problem (Lemos et al., 2019).

The maximum independent set problem (MAX-IS) is not a maximum constraint satisfaction problem, because its objective is not to maximize the number of satisfied constraints, but to satisfy all constraints while maximizing the number of variables with a certain value. We include this problem to demonstrate that our approach can be adapted to such related problems. For medium-sized instances with 10,000 constraints, inference takes less than 5 s.
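To make the unsupervised-training idea concrete, here is a hedged sketch of one way such an objective could be set up; it is my own illustration under the assumption that the network outputs a soft assignment (a probability distribution over domain values for every variable), and it is not necessarily the paper's exact loss. The expected number of violated constraints is then differentiable and requires no labelled solutions.

```python
# Sketch of an unsupervised MAX-CSP objective: penalize the expected number of
# violated binary constraints under the network's soft assignments.
import torch

def expected_violations(soft_assign, edges, relations):
    """
    soft_assign: (n, d) tensor, each row a probability distribution over the d values.
    edges:       (m, 2) long tensor, the variable pair of each binary constraint.
    relations:   (m, d, d) 0/1 tensor of allowed value pairs per constraint.
    Returns the expected number of violated constraints under independent sampling.
    """
    p_u = soft_assign[edges[:, 0]]                 # (m, d)
    p_v = soft_assign[edges[:, 1]]                 # (m, d)
    # P(constraint k satisfied) = sum_{a,b} p_u[a] * R_k[a, b] * p_v[b]
    p_sat = torch.einsum('ma,mab,mb->m', p_u, relations.float(), p_v)
    return (1.0 - p_sat).sum()

# Tiny usage example: a triangle under "not equal" (MAX-CUT-style) constraints.
logits = torch.randn(3, 2, requires_grad=True)
edges = torch.tensor([[0, 1], [1, 2], [2, 0]])
neq = (1 - torch.eye(2)).expand(3, 2, 2)
loss = expected_violations(torch.softmax(logits, dim=-1), edges, neq)
loss.backward()                                    # gradients flow into the logits
```

Minimizing this expectation drives the soft assignments toward hard assignments that satisfy many constraints, which is the MAX-CSP objective; one natural adaptation for MAX-IS would add a term rewarding variables that take the designated value, reflecting the different objective described above.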

Related Work
CONSTRAINT SATISFACTION PROBLEMS
Architecture
Loss Function
EXPERIMENTS
Maximum 2-Satisfiability
Regular Graphs
Weighted Maximum Cut Problem
Coloring
Independent Set
CONCLUSIONS
Future Work
Findings
DATA AVAILABILITY STATEMENT
