Abstract

We've seen that if we want to make progress in complexity, then we need to talk about asymptotics: not which problems can be solved in 10,000 steps, but for which problems can instances of size n be solved in cn^2 steps as n goes to infinity? We met TIME(f(n)), the class of all problems solvable in O(f(n)) steps, and SPACE(f(n)), the class of all problems solvable using O(f(n)) bits of memory. But if we really want to make progress, then it's useful to take an even coarser-grained view: one where we distinguish between polynomial and exponential time, but not between O(n^2) and O(n^3) time. From this remove, we think of any polynomial bound as “fast,” and any exponential bound as “slow.” Now, I realize people will immediately object: what if a problem is solvable in polynomial time, but the polynomial is n^50000? Or what if a problem takes exponential time, but the exponential is 1.00000001^n? My answer is pragmatic: if cases like that regularly arose in practice, then it would’ve turned out that we were using the wrong abstraction. But so far, it seems like we’re using the right abstraction. Of the big problems solvable in polynomial time – matching, linear programming, primality testing, etc. – most of them really do have practical algorithms. And of the big problems that we think take exponential time – theorem-proving, circuit minimization, etc. – most of them really don't have practical algorithms. So, that’s the empirical skeleton holding up our fat and muscle.
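As a quick numeric sanity check on those extreme cases (a sketch of my own, not from the source): comparing the logarithms of the two cost functions shows that the “slow” exponential 1.00000001^n is actually far cheaper than the “fast” polynomial n^50000 at any remotely practical input size, and only overtakes it at astronomically large n (around n ≈ 10^14).

```python
import math

def poly_log_cost(n: float) -> float:
    # Natural log of the polynomial bound n^50000.
    return 50000 * math.log(n)

def exp_log_cost(n: float) -> float:
    # Natural log of the exponential bound 1.00000001^n.
    return n * math.log(1.00000001)

# At n = 10^6, the exponential bound is vastly smaller:
# ln(1.00000001^n) ~ 0.01 versus ln(n^50000) ~ 690,000.
assert exp_log_cost(10**6) < poly_log_cost(10**6)

# But asymptotics eventually assert themselves: by n = 10^15
# the exponential has overtaken the polynomial.
assert poly_log_cost(10**15) < exp_log_cost(10**15)
```

The crossover happening only past n ≈ 10^14 is exactly the kind of pathology the objection imagines; the pragmatic point is that such bounds essentially never show up for natural problems.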
