Abstract

During incremental concept learning from examples, tentative hypotheses are formed and then modified to form new hypotheses. When there is a choice among hypotheses, bias is used to express a preference. Bias may be expressed through the choice of hypothesis language, implemented as an evaluation function for selecting among hypotheses already generated, or applied by screening potential hypotheses prior to hypothesis generation. This paper describes the use of the third method. Bias is represented explicitly, both as assumptions that reduce the space of potential hypotheses and as procedures for testing those assumptions. Using explicit assumptions offers two advantages. First, the assumptions are meta-level hypotheses that are used to generate future inductive hypotheses as well as to select among current ones. By testing these meta-level hypotheses, a system gains the power to anticipate the form of future hypotheses. Furthermore, rigorous testing of these meta-level hypotheses before they are used to generate inductive hypotheses avoids consistency checks of the inductive hypotheses. Second, bias expressed as explicit assumptions can be tested using a variety of learning methods.
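As a rough illustration of the third method, the sketch below shows bias represented as explicit, testable assumptions that screen the hypothesis space before any inductive hypotheses are generated. This is only a minimal, hypothetical rendering of the idea, not the paper's actual system; all names (`Assumption`, `screen_biases`, `generate_hypotheses`, the toy example data) are invented for illustration.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Assumption:
    """A meta-level hypothesis about the target concept (an explicit bias)."""
    name: str
    # Procedure that tests the assumption against the observed examples.
    test: Callable[[List[Dict]], bool]
    # Procedure that restricts the space of candidate hypothesis features.
    restrict: Callable[[List[str]], List[str]]


def screen_biases(assumptions: List[Assumption], examples: List[Dict]) -> List[Assumption]:
    """Keep only the assumptions whose tests pass on the observed examples."""
    return [a for a in assumptions if a.test(examples)]


def generate_hypotheses(features: List[str], assumptions: List[Assumption],
                        examples: List[Dict]) -> List[str]:
    """Generate inductive hypotheses only from assumptions that survived screening,
    so the resulting hypotheses need no separate consistency check against the bias."""
    candidates = features
    for a in screen_biases(assumptions, examples):
        candidates = a.restrict(candidates)
    return candidates


# Toy usage: assume the concept depends only on 'shape' unless the examples contradict it.
examples = [
    {"shape": "round", "color": "red", "label": True},
    {"shape": "round", "color": "blue", "label": True},
    {"shape": "square", "color": "red", "label": False},
]

shape_only = Assumption(
    name="only-shape-is-relevant",
    # The assumption holds if examples that agree on shape always agree on label.
    test=lambda exs: all(
        e1["label"] == e2["label"]
        for e1 in exs for e2 in exs
        if e1["shape"] == e2["shape"]
    ),
    restrict=lambda feats: [f for f in feats if f == "shape"],
)

print(generate_hypotheses(["shape", "color", "size"], [shape_only], examples))
# -> ['shape']  (the bias passed its test, so future hypotheses mention only 'shape')
```

Because the assumption is tested once, before hypothesis generation, any hypothesis built from the surviving feature set is consistent with the bias by construction, which is the advantage the abstract describes.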
