Abstract

Multiple known algorithmic paradigms (backtracking, local search, and the polynomial method) only yield a \(2^{n(1-1/O(k))}\)-time algorithm for k-SAT in the worst case. For this reason, it has been hypothesized that the worst-case k-SAT problem cannot be solved in \(2^{n(1-f(k)/k)}\) time for any unbounded function f. This hypothesis has been called the “Super-Strong ETH”, modeled after the ETH and the Strong ETH. We give two results on the Super-Strong ETH:

1. It has also been hypothesized that k-SAT is hard to solve for randomly chosen instances near the “critical threshold”, where the clause-to-variable ratio is \(2^k \ln 2 - \varTheta(1)\). We give a randomized algorithm which refutes the Super-Strong ETH for the case of random k-SAT and planted k-SAT for any clause-to-variable ratio. For example, given any random k-SAT instance F with n variables and m clauses, our algorithm decides satisfiability for F in \(2^{n(1-\varOmega(\log k)/k)}\) time, with high probability (over the choice of the formula and the randomness of the algorithm). It turns out that a well-known algorithm from the literature on SAT algorithms does the job: the PPZ algorithm of Paturi, Pudlak, and Zane [17] (a sketch of the PPZ procedure follows this abstract).

2. The Unique k-SAT problem is the special case where there is at most one satisfying assignment. Improving prior reductions, we show that the Super-Strong ETHs for Unique k-SAT and k-SAT are equivalent. More precisely, we show that the time complexities of Unique k-SAT and k-SAT are very tightly correlated: if Unique k-SAT is solvable in \(2^{n(1-f(k)/k)}\) time for an unbounded f, then k-SAT is solvable in \(2^{n(1-f(k)(1-\varepsilon)/k)}\) time for every \(\varepsilon > 0\).
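
The PPZ algorithm mentioned in item 1 is a simple randomized procedure: repeatedly pick a random order of the variables and assign each variable in turn, taking the forced value whenever some clause has become a unit clause under the current partial assignment, and a uniformly random bit otherwise. The following is a minimal Python sketch of that standard procedure; the function names, the DIMACS-style clause encoding, and the worst-case repetition count \(2^{n(1-1/k)}\) are illustrative assumptions of the sketch, not details from the paper (whose contribution is the analysis of PPZ on random and planted instances).

```python
# A minimal sketch of the PPZ procedure (Paturi, Pudlak, Zane), for illustration only.
# The clause encoding, function names, and repetition count are assumptions of this
# sketch, not details taken from the paper.
import random

def ppz_round(clauses, n):
    """One PPZ pass over a CNF with variables 1..n.
    Each clause is a list of nonzero ints: literal v means x_v, -v means its negation."""
    assignment = {}
    order = list(range(1, n + 1))
    random.shuffle(order)                      # process variables in a random order
    for v in order:
        forced = None
        for clause in clauses:
            # Skip clauses already satisfied by the partial assignment.
            if any(abs(l) in assignment and assignment[abs(l)] == (l > 0) for l in clause):
                continue
            unassigned = [l for l in clause if abs(l) not in assignment]
            # If the clause has become a unit clause on v, it forces v's value.
            if len(unassigned) == 1 and abs(unassigned[0]) == v:
                forced = unassigned[0] > 0
                break
        # Forced variables take the forced value; free variables get a random bit.
        assignment[v] = forced if forced is not None else (random.random() < 0.5)
    return assignment

def satisfies(clauses, assignment):
    return all(any(assignment[abs(l)] == (l > 0) for l in clause) for clause in clauses)

def ppz_solve(clauses, n, k, trials=None):
    """Repeat independent PPZ rounds. The classical worst-case analysis uses roughly
    2^{n(1-1/k)} repetitions (times a polynomial factor to boost the success
    probability); this naive count is only workable for small illustrative instances."""
    if trials is None:
        trials = int(2 ** (n * (1.0 - 1.0 / k))) + 1
    for _ in range(trials):
        a = ppz_round(clauses, n)
        if satisfies(clauses, a):
            return a          # satisfying assignment found
    return None               # none found; with high probability the formula is unsatisfiable
```

For example, `ppz_solve([[1, 2, 3], [-1, 2, -3], [1, -2, -3]], n=3, k=3)` returns a dict such as `{1: True, 2: True, 3: False}` when the instance is satisfiable, and `None` otherwise.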
