Abstract

The Candidate Elimination (CE) algorithm, like most inductive learning algorithms, uses a fixed, restricted concept language to focus the search on useful generalisations. The drawback of this approach is that it can easily give rise to inconsistency with the data, while still not ensuring tractability of the search involved. A more flexible way to trade off efficiency and consistency in concept learning is to work with a variable-sized concept language, starting with small sizes (more efficient but less consistent) and shifting to larger sizes (more consistent but less efficient) when necessary. In this paper we propose an algorithm, called the Factored Candidate-Elimination (FCE) algorithm, for inducing version spaces over a set of variable-factored conjunctive concept languages. FCE employs the standard CE algorithm to perform induction within each concept language, but it can then induce the new version spaces after any language shift without re-processing the instances already seen. We discuss applications of this framework to improving consistency (when a set of inconsistent concept languages is given) and efficiency (when a consistent, factorable concept language is given). We evaluate the latter with respect to a tree-structured, attribute-based conjunctive concept language, and show when this approach leads to a reduction in complexity.
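For readers unfamiliar with the standard CE algorithm that FCE builds on, the following is a minimal sketch of version-space induction over a conjunctive attribute-based concept language. The representation (tuples of attribute values with `'?'` as a wildcard and `'0'` as the empty hypothesis), the helper names, and the example data are all illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the standard Candidate-Elimination (CE) algorithm over a
# conjunctive attribute-based concept language. A hypothesis is a tuple of
# attribute values; '?' is a wildcard matching any value. The names and data
# below are illustrative, not from the paper.

def matches(h, x):
    """A hypothesis covers an instance if every non-wildcard attribute agrees."""
    return all(a == '?' or a == v for a, v in zip(h, x))

def generalize(h, x):
    """Minimally generalize h so that it covers the positive instance x."""
    return tuple(a if a == v else '?' for a, v in zip(h, x))

def candidate_elimination(examples, n_attrs):
    empty = tuple(['0'] * n_attrs)       # most specific: covers nothing
    S = empty
    G = [tuple(['?'] * n_attrs)]         # most general: covers everything
    for x, positive in examples:
        if positive:
            # drop general hypotheses that fail to cover the positive instance,
            # and minimally generalize the specific boundary
            G = [g for g in G if matches(g, x)]
            S = x if S == empty else generalize(S, x)
        else:
            # minimally specialize general hypotheses that wrongly cover x,
            # keeping only specializations consistent with S
            new_G = []
            for g in G:
                if not matches(g, x):
                    new_G.append(g)
                    continue
                for i in range(n_attrs):
                    if g[i] == '?' and S[i] != '?' and S[i] != x[i]:
                        new_G.append(g[:i] + (S[i],) + g[i + 1:])
            G = new_G
    return S, G
```

For example, on the instances `(('sunny', 'warm'), True)`, `(('rainy', 'cold'), False)`, `(('sunny', 'hot'), True)`, the S and G boundaries converge to the single hypothesis `('sunny', '?')`. FCE, as the abstract describes, runs this same update within each factor of the concept language, so that shifting languages does not require replaying the example stream.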
