We are concerned with programs for computing functions, and with the running times of these programs as measured by “step-counting” functions. The notions of “program” and “step-counting function” are treated axiomatically, so the theorems are machine-independent. In particular, we are interested in 0–1 valued recursive functions f and their complexity lower bounds F. We say that F(x) is a lower bound on the number of steps to compute f(x) if every program that computes f(x) takes at least F(x) steps for almost all x. In other words, relative to F, f is difficult to compute on all but a finite number of arguments, no matter what program is used to compute the function. It is known that for a large class of such functions there is an effective procedure with the following property: given any index for the function, one can compute an upper bound on the number of arguments on which the function is easy to compute. We exhibit another large class of functions for which it is not possible to effectively compute this upper bound. Further, one can never effectively go from an index of any function to a bound on the size of the arguments on which the function is easy to compute. On the other hand, we show that for some functions f, any lower bound F(x) on the number of steps to compute f(x) is either very poor, in the sense that a much higher lower bound exists, or else, if F(x) is not poor in this sense, the number of steps required to compute F(x) is greater than the number of steps required to compute f(x) by some reasonably good program for f. Intuitively, this means that a good lower bound F(x) on the number of steps to compute f(x) may be useless because it takes longer to compute F(x) than to compute f(x) itself. The proof technique used is significant in its own right. It combines techniques used in the proofs of two of the most important results in the theory of computational complexity, the gap theorem (Borodin, 1972) and the operator speed-up theorem (Meyer and Fischer, 1968).
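To make the central definition concrete, the following is a sketch in standard Blum-measure notation, which the abstract does not itself fix: φ_i denotes the partial recursive function computed by program i, and Φ_i its associated step-counting function.

\[
  F \text{ is a lower bound for } f
  \;\Longleftrightarrow\;
  (\forall i)\,\bigl[\varphi_i = f \;\Rightarrow\; \Phi_i(x) \ge F(x)
  \text{ for almost all } x\bigr].
\]

Under the same assumed notation, one reading of the dichotomy stated above is: there is a reasonably good program j for f such that, for every lower bound F for f, either some much higher lower bound for f exists, or

\[
  (\forall k)\,\bigl[\varphi_k = F \;\Rightarrow\; \Phi_k(x) > \Phi_j(x)
  \text{ for almost all } x\bigr];
\]

the precise quantifiers and the sense of “much higher” are as developed in the body of the paper.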