Abstract

The challenge of solving worst-case intractable computational problems lies at the core of much of the work in the constraint programming community. The traditional approach in computer science to hard computational tasks is to identify subclasses of problems with interesting, tractable structure. Linear programming and network flow problems are notable examples of such well-structured classes; propositional Horn theories are a good example from the domain of logical inference. However, it has become clear that many real-world problem domains cannot be modeled adequately in such well-defined tractable formalisms; richer, worst-case intractable formalisms are required instead. For example, planning problems can be captured in general propositional theories and related constraint formalisms, and many hardware and software verification problems can similarly be reduced to Boolean satisfiability (SAT) problems. Despite the use of such inherently worst-case intractable representations, ever larger real-world problem instances are now being solved quite effectively: recent state-of-the-art SAT and constraint solvers can handle hand-crafted instances with hundreds of thousands of variables and constraints. This strongly suggests that worst-case complexity is only part of the story. I will discuss how notions of typical-case and average-case complexity can lead to more refined insights into the study and design of algorithms for handling real-world computationally hard problems. We will see that such insights result from a cross-fertilization of ideas from different communities, in particular statistical physics, computer science, and combinatorics.

Work supported in part by the Intelligent Information Systems Institute at Cornell University, sponsored by AFOSR (F49620-01-1-0076), and by an NSF ITR grant (IIS-0312910).
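
As a concrete illustration of the kind of tractable structure the abstract points to: a propositional Horn theory (every clause has at most one positive literal) can be decided in time linear in the formula size by unit propagation, in contrast to general SAT. The sketch below is not from the talk; it is a minimal Python illustration under an assumed clause encoding, where each clause is a pair `(body, head)` representing the implication body -> head, with `head = None` for a clause with no positive literal.

```python
from collections import deque

def horn_sat(clauses):
    """Decide satisfiability of a propositional Horn formula.

    Each clause is a pair (body, head): body is a frozenset of atoms
    (the negated literals) and head is an atom, or None if the clause
    has no positive literal.  Returns the set of atoms forced true
    (the minimal model) if satisfiable, or None if unsatisfiable.
    """
    true_atoms = set()
    remaining = []   # per clause: number of body atoms not yet derived
    watch = {}       # atom -> indices of clauses whose body contains it
    queue = deque()  # atoms forced true, awaiting propagation

    for i, (body, head) in enumerate(clauses):
        remaining.append(len(body))
        for atom in body:
            watch.setdefault(atom, []).append(i)
        if not body:                  # unit clause: its head must hold
            if head is None:
                return None           # empty clause: trivially unsat
            queue.append(head)

    while queue:
        atom = queue.popleft()
        if atom in true_atoms:
            continue                  # already propagated
        true_atoms.add(atom)
        for i in watch.get(atom, []):
            remaining[i] -= 1
            if remaining[i] == 0:     # whole body derived true
                head = clauses[i][1]
                if head is None:
                    return None       # derived False: unsatisfiable
                queue.append(head)

    return true_atoms                 # all remaining atoms stay False


# Example: {a, a -> b, not a or not b} is unsatisfiable.
print(horn_sat([(frozenset(), "a"),
                (frozenset({"a"}), "b"),
                (frozenset({"a", "b"}), None)]))  # -> None
```

Each clause is visited at most once per body atom, which is what gives the linear running time; general CNF admits no such forward-chaining argument, which is one way to see the gap between the tractable subclass and the worst-case intractable formalism the abstract contrasts it with.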
