Abstract

This paper addresses the problem of using explanation-based generalizations of negative inputs in Mitchell's candidate elimination algorithm and in other, more recent approaches to version spaces. It points out that a naive combination would produce a worse result than that obtainable without input preprocessing, a problem not perceived in previous work. The costs and benefits of the extra computation required to take advantage of a prior EBL phase are analysed for a conjunctive concept language defined on a tree-structured attribute-based instance space. This result appears to be independent of the particular inductive learning algorithm considered (i.e., version spaces), thus helping to clarify one aspect of the ill-understood relation between analytical generalization and empirical generalization.
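For context, the inductive component the abstract refers to can be illustrated with a minimal sketch of Mitchell's candidate elimination algorithm over a conjunctive attribute-based concept language. The attribute domains, example data, and function names below are illustrative assumptions, not taken from the paper; real version-space implementations also prune redundant boundary members, which this sketch omits.

```python
WILD = '?'  # wildcard: this attribute may take any value

def covers(h, x):
    """True if conjunctive hypothesis h covers instance (or hypothesis) x."""
    return all(a == WILD or a == b for a, b in zip(h, x))

def min_generalize(h, x):
    """Minimal generalization of h so that it covers positive instance x."""
    return tuple(a if a == b else WILD for a, b in zip(h, x))

def min_specialize(g, x, domains):
    """Minimal specializations of g that exclude negative instance x."""
    specs = []
    for i, a in enumerate(g):
        if a == WILD:
            for v in domains[i]:
                if v != x[i]:
                    specs.append(g[:i] + (v,) + g[i + 1:])
    return specs

def candidate_elimination(examples, domains):
    """Maintain the S (specific) and G (general) version-space boundaries."""
    S = None                              # single most-specific hypothesis
    G = [tuple(WILD for _ in domains)]    # most-general boundary
    for x, positive in examples:
        if positive:
            G = [g for g in G if covers(g, x)]          # drop inconsistent g
            S = x if S is None else min_generalize(S, x)
        else:
            G = [s for g in G
                 for s in ([g] if not covers(g, x)
                           else min_specialize(g, x, domains))]
            if S is not None:
                G = [g for g in G if covers(g, S)]      # keep g above S
    return S, G

# Toy run with two invented attributes (sky, temperature):
domains = [('sunny', 'rainy'), ('warm', 'cold')]
examples = [(('sunny', 'warm'), True),
            (('rainy', 'cold'), False),
            (('sunny', 'cold'), True)]
S, G = candidate_elimination(examples, domains)
print(S, G)  # boundaries converge to the concept "sky = sunny"
```

Preprocessing a negative example with EBL would, in this setting, replace the raw instance `x` in the `else` branch with an explanation-based generalization of it, which is where the interaction the paper analyses arises.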
