Abstract

Previous bias shift approaches to predicate invention are not applicable to learning from positive examples only, if a complete hypothesis can be found in the given language, as negative examples are required to determine whether new predicates should be invented or not. One approach to this problem is presented, MERLIN 2.0, a successor of a system in which predicate invention is guided by sequences of input clauses in SLD-refutations of positive and negative examples w.r.t. an overly general theory. In contrast to its predecessor, which searches for the minimal finite-state automaton that can generate all positive and no negative sequences, MERLIN 2.0 uses a technique for inducing Hidden Markov Models from positive sequences only. This enables the system to invent new predicates without being triggered by negative examples. Another advantage of this induction technique is that it allows for incremental learning. Experimental results are presented comparing MERLIN 2.0 with the positive-only learning framework of Progol 4.2, and comparing the original induction technique with a new version that produces deterministic Hidden Markov Models. The results show that predicate invention may indeed be both necessary and possible when learning from positive examples only, and that it can be beneficial to keep the induced model deterministic.

Keywords: Hidden Markov Model, Logic Program, Positive Sequence, Inductive Logic Programming, Negative Sequence
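To make the sequence-generalisation idea behind these systems concrete, below is a minimal, illustrative sketch in Python. It is not the MERLIN implementation; the function names and toy data are hypothetical. The sketch builds a prefix-tree acceptor from positive sequences of input-clause labels, then merges two states, cascading further merges so the automaton stays deterministic. Each merge generalises the acceptor so that it covers unseen sequences; MERLIN 2.0's actual technique instead induces Hidden Markov Models and selects merges probabilistically, rather than relying on negative sequences to constrain them.

def build_prefix_tree(sequences):
    """Build a deterministic prefix-tree acceptor from positive sequences.
    States are integers (0 is the start state); trans[q][sym] is the
    successor of state q on symbol sym; accepting holds the final states."""
    trans = {0: {}}
    accepting = set()
    fresh = 1
    for seq in sequences:
        q = 0
        for sym in seq:
            if sym not in trans[q]:
                trans[q][sym] = fresh
                trans[fresh] = {}
                fresh += 1
            q = trans[q][sym]
        accepting.add(q)
    return trans, accepting

def merge_and_fold(trans, accepting, s, t):
    """Merge state t into state s; whenever the merged state would leave on
    the same symbol to two different targets, those targets are merged in
    turn ('folding'), so the automaton stays deterministic."""
    trans = {q: dict(out) for q, out in trans.items()}
    accepting = set(accepting)
    rep = {}                      # merged-away state -> its representative

    def find(q):
        while q in rep:
            q = rep[q]
        return q

    worklist = [(s, t)]
    while worklist:
        a, b = worklist.pop()
        a, b = find(a), find(b)
        if a == b:
            continue
        rep[b] = a
        if b in accepting:
            accepting.discard(b)
            accepting.add(a)
        for sym, target in trans.pop(b, {}).items():
            if sym in trans[a]:
                worklist.append((trans[a][sym], target))   # conflict: fold
            else:
                trans[a][sym] = target
    # point every remaining transition at representative states
    trans = {q: {sym: find(tgt) for sym, tgt in out.items()}
             for q, out in trans.items()}
    return trans, {find(q) for q in accepting}

# Toy data: sequences of input-clause labels from SLD-refutations.
seqs = [("c1", "c2"), ("c1", "c2", "c2"), ("c1", "c2", "c2", "c2")]
trans, acc = build_prefix_tree(seqs)
# States 2, 3 and 4 are reached after one, two and three uses of c2.
# Merging state 3 into state 2 folds state 4 in as well, yielding a loop:
trans, acc = merge_and_fold(trans, acc, 2, 3)
print(trans)   # {0: {'c1': 1}, 1: {'c2': 2}, 2: {'c2': 2}}
print(acc)     # {2} -- the acceptor now covers every sequence c1 c2+

In a deterministic, automaton-based setting like the predecessor's, negative sequences are what prevent such merges from over-generalising; a probabilistic induction technique replaces that check with a score over candidate merges, which is what makes learning from positive sequences alone possible.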
