Backtracking combined with branching heuristics is a prevalent approach to constraint satisfaction problems (CSPs) and combinatorial optimization problems (COPs). While branching heuristics designed for specific problems can be theoretically efficient, they are often complex and difficult to implement in practice. General branching heuristics, on the other hand, apply across a wide range of problems, but at the risk of suboptimality. We introduce a solver framework that leverages Shannon entropy in branching heuristics to bridge this gap between generality and specificity: backtracking follows the path of least uncertainty, guided by probability distributions that conform to the problem constraints. We employ graph neural network (GNN) models with loss functions derived from the probabilistic method to learn these distributions. We evaluate our approach on two NP-hard problems: the (minimum) dominating-clique problem and the edge-clique-cover problem. Compared with state-of-the-art solvers for both problems, our framework produces competitive results. Specifically, for the (minimum) dominating-clique problem, it generates fewer branches than the solver of Culberson et al. (2005); for the edge-clique-cover problem, it produces smaller edge clique covers (ECCs) than the solvers referenced by Conte et al. (2020) and Kellerman (1973).
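To make the entropy-guided selection concrete, here is a minimal sketch of how a solver might pick the next branching variable, assuming a GNN has already produced a probability distribution for each unassigned variable; the function and variable names (`shannon_entropy`, `select_branch_variable`, `preds`) are illustrative and not taken from the paper.

```python
import math

def shannon_entropy(probs):
    """Shannon entropy H(p) = -sum p_i * log2(p_i), treating 0*log(0) as 0."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def select_branch_variable(distributions):
    """Return the unassigned variable whose predicted distribution has the
    lowest entropy, i.e. the branch with the least uncertainty."""
    return min(distributions, key=lambda var: shannon_entropy(distributions[var]))

# Hypothetical example: distributions over {include, exclude} that a GNN
# might predict for three candidate vertices of a dominating-clique instance.
preds = {
    "v1": [0.95, 0.05],  # low entropy: the model is confident about v1
    "v2": [0.50, 0.50],  # maximum entropy: the model is unsure about v2
    "v3": [0.70, 0.30],
}
print(select_branch_variable(preds))  # -> "v1"
```

Branching on the most certain variable first keeps the search tree shallow along confident paths, which is consistent with the abstract's claim of generating fewer branches.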