Abstract

A computational theory of induction must be able to identify the projectible predicates, that is, to distinguish the predicates that can be used in inductive inferences from those that cannot. The problems of projectibility are introduced by reviewing some of the stumbling blocks for the theory of induction developed by the logical empiricists. My diagnosis of these problems is that the traditional theory of induction, which starts from a given (observational) language in relation to which all inductive rules are formulated, does not go deep enough in representing the kind of information used in inductive inferences.

As an interlude, I argue that the problem of induction, like so many other problems within AI, is a problem of knowledge representation. To the extent that AI systems are based on linguistic representations of knowledge, they will face essentially the same problems over induction as the logical empiricists did.

In a more constructive mode, I then outline a non-linguistic knowledge representation based on conceptual spaces. The fundamental units of these spaces are “quality dimensions”. In relation to such a representation it is possible to define “natural” properties, which can be used for inductive projections. I argue that this approach evades most of the traditional problems.
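As a rough illustration only (not taken from the paper), the following Python sketch models a conceptual space as a tuple of quality dimensions and a “natural” property as a convex region of that space. The specific dimensions (hue, brightness), the convexity criterion, and the toy membership test are all illustrative assumptions introduced here, not the author's formal definitions.

```python
from dataclasses import dataclass
from typing import List, Tuple

# A point in a conceptual space: one coordinate per quality dimension.
Point = Tuple[float, ...]

@dataclass
class ConvexRegion:
    """A candidate 'natural' property, modeled here (assumption) as a convex
    region of the space: points lying between observed members also count."""
    members: List[Point]

    def contains(self, p: Point, steps: int = 20, tol: float = 0.05) -> bool:
        # Crude membership test: p counts as inside if it lies (within tol)
        # on a line segment between two observed members.
        for a in self.members:
            for b in self.members:
                for i in range(steps + 1):
                    t = i / steps
                    q = tuple(xa + t * (xb - xa) for xa, xb in zip(a, b))
                    if all(abs(qi - pi) < tol for qi, pi in zip(q, p)):
                        return True
        return False

# Illustrative quality dimensions: (hue, brightness), both scaled to [0, 1].
green = ConvexRegion(members=[(0.30, 0.4), (0.35, 0.6), (0.33, 0.5)])

# A gerrymandered, "grue"-like predicate would not carve out a convex region
# in these dimensions, so on this reading it would not be projectible.
print(green.contains((0.32, 0.5)))   # True: falls between observed greens
print(green.contains((0.90, 0.5)))   # False: outside the region
```

The sketch is only meant to show the shape of the proposal: projectibility is decided by the geometry of the representation rather than by the syntax of a predicate in some observational language.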
