Abstract

This chapter discusses two methods, FRINGE and GREEDY3, that adaptively enlarge the set of primitive attributes from which a test at a decision node can be selected. A commonly studied class of target concepts in empirical learning is the class of concepts that have a small disjunctive normal form (DNF) representation. Concepts with a small DNF description do not always have a concise decision tree representation when the tests at the decision nodes are limited to the primitive attributes: the tests that verify whether an instance satisfies a term must be replicated in several subtrees. For example, a decision tree for (x1 AND x2) OR (x3 AND x4) over primitive attributes must repeat the tests on x3 and x4 under every branch in which the first term fails. Traditional methods for learning decision trees from examples choose the test to place at each decision node from a predefined set of attributes; FRINGE and GREEDY3 relax this restriction. FRINGE builds a decision tree using the primitive attributes and then analyses this tree to define new, useful compound attributes. GREEDY3 builds a restricted type of decision tree called a decision list, and the attribute at each decision node is defined by a greedy top-down sample refinement method. The performance of a learning algorithm may be measured by its classification accuracy on unseen instances of the target concept. Parity and majority concepts with a large number of inputs, however, remain hard for decision trees.
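The abstract does not give code for either algorithm; the following minimal Python sketch (hypothetical names, not the chapter's implementation) illustrates the kind of tree analysis FRINGE performs: a new Boolean attribute is formed by conjoining the last two tests on the path to a leaf, so the compound test becomes available as a single attribute when the tree is rebuilt.

```python
def fringe_attribute(grand_test, grand_outcome, parent_test, parent_outcome):
    """Define a new attribute from the two tests nearest a leaf.

    Each argument names a primitive attribute and the outcome taken on
    the path to the leaf; the new attribute is their conjunction and is
    evaluated on an instance given as a dict of attribute values.
    """
    def new_attr(instance):
        return (instance[grand_test] == grand_outcome
                and instance[parent_test] == parent_outcome)
    return new_attr

# Example: suppose the path to a leaf ends with the tests x1 = 1, then
# x3 = 0.  The derived attribute is true exactly when x1 is 1 and x3 is 0.
attr = fringe_attribute("x1", 1, "x3", 0)
print(attr({"x1": 1, "x2": 0, "x3": 0}))  # True
print(attr({"x1": 1, "x2": 1, "x3": 1}))  # False
```

Adding such compound attributes and rebuilding the tree lets a single node test what previously required a replicated subtree, which is how FRINGE shrinks the representation of small-DNF concepts.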
