Abstract

The SAT Competitions constitute a well-established series of yearly open international algorithm implementation competitions focusing on the Boolean satisfiability (or propositional satisfiability, SAT) problem. In this article, we provide a detailed account of the 2020 instantiation of the SAT Competition, covering the new competition tracks and benchmark selection procedures, an overview of the solving strategies implemented in top-performing solvers, and an analysis of the empirical data obtained from running the competition.

Highlights

  • From what was once mainly the archetypal intractable problem, propositional satisfiability has flourished into a success story of modern computer science [1]

  • We provide a detailed account of SAT Competition 2020 in terms of organizational details, competition tracks, participating solvers, benchmarks, and the empirical results from the competition

  • In terms of empirical results, we provide further analysis of the competition outcomes, going beyond the standard rankings provided on the SAT Competition web pages

Summary

Introduction

From what was once mainly the archetypal intractable (in particular NP-complete) problem, propositional satisfiability (or Boolean satisfiability, SAT) has flourished into a success story of modern computer science [1]. This is due to advances in SAT solvers, i.e., implementations of decision procedures for SAT, which today form a central computational tool for solving real-world problem instances of various kinds of NP-hard search and optimization problems. This article focuses on the 2020 instantiation of the SAT Competitions. To this end, we provide a detailed account of SAT Competition 2020 in terms of organizational details, competition tracks, participating solvers, benchmarks, and the empirical results from the competition. We start by providing an overview of the competition, including details on and motivations for the several competition tracks, the rules and other technical requirements of the competition, the ranking schemes used in evaluating the competing solvers, and the computing environments used for executing the competition (Section 2).
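To make "decision procedure for SAT" concrete: competing solvers decide whether a propositional formula in conjunctive normal form (CNF) has a satisfying assignment. As a minimal illustration only (not the algorithm of any competing solver, which rely on far more sophisticated techniques such as conflict-driven clause learning), a brute-force decision procedure over DIMACS-style integer literals can be sketched as:

```python
from itertools import product

def solve_sat(clauses, n_vars):
    """Brute-force SAT decision procedure: try every truth assignment.

    A formula is a list of clauses; each clause is a list of non-zero
    integers (DIMACS-style literals): k means variable k is true,
    -k means variable k is false.
    """
    for assignment in product([False, True], repeat=n_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        # The formula is satisfied if every clause has a true literal.
        if all(any(lit_true(lit) for lit in clause) for clause in clauses):
            return list(assignment)  # satisfying assignment found
    return None  # unsatisfiable

# (x1 OR x2) AND (NOT x1 OR x2) AND (NOT x2 OR x3)
model = solve_sat([[1, 2], [-1, 2], [-2, 3]], n_vars=3)
print(model)  # prints [False, True, True]
```

This exhaustive search takes time exponential in the number of variables, which is exactly why the engineering advances benchmarked in the competition matter in practice.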

Competition tracks
Mandatory participation requirements
Solver ranking and disqualification
Certificates
Computing environments
Benchmarks
Selection of instances
Results
Planning instances
Incremental Library track benchmarks
Competition results
Main track
Planning track
Parallel track
Cloud track
Sequential SAT solvers
Parallel SAT solvers
Massively parallel SAT solvers in the Cloud track
Contributions to the Virtual Best Solver
Greedy set cover
Time-limited schedules
Small portfolios
Score per instance family
Similarity of solvers
Influence of benchmark selection on solver ranking
Conclusion and prospects
Findings
Prospects