Abstract
Two algorithms that learn decision trees from examples and from their EBL (explanation-based learning) generated rules are presented. The first, IDG-1, learns correct but incomplete trees. Guided by examples, it transforms a rule set into a decision tree tailored to efficient execution. Tests in an example domain show that these trees can be executed much faster than the corresponding EBL-generated rule sets, even when various methods for optimizing rule execution have been applied. IDG-1 is thus one way to ease the utility problem of EBL. The second algorithm, IDG-2, induces complete but no longer entirely correct trees. Compared with trees learned by ID3, the trees induced by IDG-2 showed significantly lower error rates. Since both algorithms construct a tree in a very similar way, this demonstrates that the conditions derived from examples and a domain theory via EBL are better suited for tree induction than the simple conditions ID3 constructs from the example descriptions.