Abstract

It is shown that allowing a bounded number of anomalies (mistakes) in the final programs learned by an algorithmic procedure can considerably “succinctify” those final programs. Naturally, only those contexts are investigated in which the presence of anomalies is not actually required for successful inference (learning). The contexts considered are certain infinite subclasses of the class of characteristic functions of finite sets. For each finite set D, these subclasses contain a finite set containing D. This latter property prevents the anomalies from wiping out all the information in the sets featured in these subclasses and shows the context to be fairly robust. Some of the results in the present paper are shown to be provably more constructive than others. The results of this paper can also be interpreted as facts about the succinctness of coding finite sets, facts which have interesting consequences for the learnability of decision procedures for finite sets.
