Abstract

This paper provides an initial study of the effects on inductive inference of paradigm shifts whose absence is approximately modeled by various formal approaches to forbidding large changes in the size of conjectured programs. One approach, called severely parsimonious, requires all programs conjectured on the way to success to be nearly (i.e., within a recursive function of) minimal size. It is shown that this very conservative constraint allows learning infinite classes of functions, but not infinite r.e. classes of functions. Another approach, called non-revolutionary, requires all conjectures to be nearly the same size as one another. This quite conservative constraint is, nonetheless, shown to permit learning some infinite r.e. classes of functions. Allowing up to one extra, bounded-size mind change toward a final learned program certainly does not appear revolutionary. However, somewhat surprisingly for scientific (inductive) inference, it is shown that there are classes learnable with the non-revolutionary constraint (respectively, with severe parsimony), with up to (i+1) mind changes and no anomalies, which cannot be learned with no size constraint and with an unbounded, finite number of anomalies permitted in the final program, but with no more than i mind changes. Hence, in some cases, the possibility of one extra mind change is considerably more liberating than the removal of very conservative size-shift constraints. The proofs of these results are also combinatorially interesting.
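For orientation, the two size constraints can be sketched roughly as follows. This is an informal rendering, not the paper's verbatim definitions: the learner M, the class S, the acceptable programming system φ, the size measure, the helper MinSize, and the recursive bound h are notation assumed here, and the exact quantifier placement in the paper may differ.

```latex
% Hedged sketch of the two constraints, under the assumptions stated above.
% MinSize(f) = min { size(q) : \varphi_q = f } is the minimal program size for f.
\[
  \textit{severely parsimonious:}\quad
  (\exists\, \text{recursive } h)\,(\forall f \in S)\,
  (\forall\, \text{conjectures } p \text{ of } M \text{ on } f)\;
  \mathrm{size}(p) \le h(\mathrm{MinSize}(f)).
\]
\[
  \textit{non-revolutionary:}\quad
  (\exists\, \text{recursive } h)\,(\forall f \in S)\,
  (\forall\, \text{conjectures } p, p' \text{ of } M \text{ on } f)\;
  \mathrm{size}(p) \le h(\mathrm{size}(p')).
\]
```

On this reading, severe parsimony ties every conjecture to the minimal size of a correct program, while the non-revolutionary constraint only ties the conjectures to one another, which is why the latter leaves room for learning some infinite r.e. classes.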
