Abstract

Monotonic and non-monotonic reasoning are introduced into inductive inference. In inductive inference, a mathematical theory of algorithmic learning from possibly incomplete information, monotonicity means constructing hypotheses incrementally, whereas the need for non-monotonic reasoning indicates that considerable belief revisions may be required during hypothesis formation. It is therefore of particular interest to find areas of inductive inference where monotonic construction of hypotheses is always possible. It turns out that in the inductive inference of total recursive functions, monotonicity can rarely be guaranteed. These results are compared to the problem of inductively inferring text patterns from finite samples. For this area, there is a universal weakly monotonic inductive inference algorithm. The computability of a stronger algorithm developed here depends on the decidability of the inclusion problem for pattern languages, which remains open. Unfortunately, the latter algorithm turns out to be inconsistent, i.e. it sometimes generates hypotheses that fail to reflect the information they are built upon. Consistency and monotonicity can hardly be achieved simultaneously. This raises the question under which circumstances an inductive inference algorithm for learning text patterns can be both consistent and monotonic; the class of problems for which this is possible is characterized by closedness under intersection.
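The abstract uses several monotonicity notions without defining them. For orientation, here is a sketch of the standard formalizations from the inductive inference literature; the notation ($t$ for a text, $h_i$ for hypotheses, $L(h)$ for the language a hypothesis describes) is our assumption, not taken from the paper. Given a text $t = w_0, w_1, \ldots$ for a target language $L$ and the sequence of hypotheses $h_0, h_1, \ldots$ produced by a learner:

$$
\begin{aligned}
\text{strongly monotonic:}\quad & L(h_i) \subseteq L(h_j)\ \text{for all}\ i \le j,\\
\text{monotonic:}\quad & L(h_i) \cap L \subseteq L(h_j) \cap L\ \text{for all}\ i \le j,\\
\text{weakly monotonic:}\quad & \text{if } \{w_0,\dots,w_j\} \subseteq L(h_i),\ \text{then } L(h_i) \subseteq L(h_j)\ \text{for}\ i \le j,\\
\text{consistent:}\quad & \{w_0,\dots,w_i\} \subseteq L(h_i)\ \text{for all}\ i.
\end{aligned}
$$

Inconsistency, as used in the abstract, is the failure of the last condition: the current hypothesis need not account for all of the data it was built from.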
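The text patterns referred to above are patterns in Angluin's sense: strings over terminals and variables, where a word belongs to the pattern's language iff some non-empty substitution for the variables (the same substring at every occurrence of a variable) yields that word. As an illustrative sketch only, not part of the paper, the following Python function decides membership by brute-force backtracking; the function name, the pattern representation, and the example pattern `0x1x` are our own choices. Membership for pattern languages is NP-complete in general, so this naive search is exponential in the worst case.

```python
def matches(pattern, word, variables):
    """Decide whether `word` belongs to the pattern language of `pattern`.

    `pattern` is a sequence of symbols; symbols in `variables` are pattern
    variables, all others are terminal characters. Every occurrence of a
    variable must be replaced by the same non-empty substring
    (Angluin-style non-erasing substitution).
    """
    def backtrack(p, w, binding):
        # All pattern symbols consumed: succeed iff the word is, too.
        if p == len(pattern):
            return w == len(word)
        sym = pattern[p]
        if sym in binding:
            # Already-bound variable: its value must occur here verbatim.
            val = binding[sym]
            return (word.startswith(val, w)
                    and backtrack(p + 1, w + len(val), binding))
        if sym in variables:
            # Unbound variable: try every non-empty substring starting at w.
            for end in range(w + 1, len(word) + 1):
                binding[sym] = word[w:end]
                if backtrack(p + 1, end, binding):
                    return True
                del binding[sym]
            return False
        # Terminal symbol: must match the next character literally.
        return w < len(word) and word[w] == sym and backtrack(p + 1, w + 1, binding)

    return backtrack(0, 0, {})


# Example: the pattern 0x1x generates 0ab1ab but not 0ab1ba.
print(matches("0x1x", "0ab1ab", variables={"x"}))  # True
print(matches("0x1x", "0ab1ba", variables={"x"}))  # False
```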
