Abstract

Most studies that model inaccurate data in Gold-style learning consider cases in which the number of inaccuracies is finite. The present paper argues that this restriction is unreasonable for modeling inaccuracies in concepts that are infinite in nature (for example, graphs of computable functions). The effect of infinitely many inaccuracies in the input data is therefore studied in Gold's model of learning, in the context of identification in the limit of computer programs from graphs of computable functions. Three kinds of inaccuracies are considered: noisy data, incomplete data, and imperfect data. The amount of each kind of inaccuracy in the input is measured using certain density notions. A number of hierarchy results are shown based on the densities of inaccuracies present in the input data, and several results establishing trade-offs between the density and the type of inaccuracies are also derived.
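The paper's formal density notions are not reproduced on this page, but a natural candidate measures the limiting fraction of inaccurate items among the first n data points of a presentation. The sketch below, a hypothetical illustration only (the predicate name and the prefix-density formulation are assumptions, not the paper's definitions), computes such a prefix density:

```python
from fractions import Fraction

def prefix_density(inaccurate, n):
    """Fraction of the first n presented data points that are inaccurate.

    `inaccurate` is a hypothetical predicate on indices 0..n-1 marking
    which items of the input sequence are noisy, missing, or otherwise
    imperfect. A density notion of the kind the abstract alludes to
    would typically be defined as a limit (inferior or superior) of
    such prefix densities as n grows.
    """
    return Fraction(sum(1 for i in range(n) if inaccurate(i)), n)

# Example: a presentation in which every third data point is inaccurate
d = prefix_density(lambda i: i % 3 == 0, 30)  # density 1/3 on this prefix
```

Under this reading, the hierarchy results would compare learning power as this density bound varies, and the trade-off results would compare, say, a given density of noisy data against the same density of incomplete data.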
