Abstract

To solve problems more effectively with accumulating experience, a system must be able to learn and exploit search control knowledge. While previous research has demonstrated that explanation-based learning is a viable method for acquiring search control knowledge, in practice explanation-based techniques may not generate effective control knowledge. For control knowledge to be effective, the cumulative benefits of applying the knowledge must outweigh the cumulative costs of testing whether the knowledge is applicable. To produce effective control knowledge, an explanation-based learner must generate explanations that capture the key features relevant to control decisions, and represent this information so that it can be easily exploited. This paper describes three mechanisms incorporated in the PRODIGY system for attacking this problem. First, PRODIGY is selective about what it learns from a particular example. Second, after generating an initial explanation, the system attempts to re-represent the explanation to reduce the cost of testing whether it is applicable. Finally, PRODIGY empirically evaluates the utility of the rules it learns.[1]

[1] This research was sponsored in part by the Defense Advanced Research Projects Agency (DOD), ARPA order No. 4976, monitored by the Air Force Avionics Laboratory under contract F33615-84-K-1520, in part by the Office of Naval Research under contract N00014-84-K-0345, in part by a gift from the Hughes Corporation, and in part by a Bell Laboratories Scholarship supporting the primary author. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of DARPA, AFOSR, ONR, or the US government.
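The cost-benefit criterion described above can be made concrete with a small sketch. The following Python fragment (illustrative only, not taken from the paper; all names are hypothetical) tracks, for one learned control rule, the cumulative cost of testing its applicability against the cumulative search time it saves, and reports whether the rule currently pays for itself:

```python
# Illustrative sketch of empirically evaluating a control rule's utility:
# a rule is worth keeping only if its cumulative savings exceed the
# cumulative cost of testing whether it applies. Hypothetical names.
from dataclasses import dataclass


@dataclass
class RuleStats:
    match_cost: float = 0.0   # total time spent testing applicability
    savings: float = 0.0      # total search time saved when the rule fired
    applications: int = 0     # number of times the rule actually fired

    def record_match_attempt(self, cost: float, fired: bool,
                             saved: float = 0.0) -> None:
        """Record one applicability test and, if the rule fired, its payoff."""
        self.match_cost += cost
        if fired:
            self.applications += 1
            self.savings += saved

    def utility(self) -> float:
        """Positive: keep the rule; negative: a candidate for discarding."""
        return self.savings - self.match_cost


stats = RuleStats()
stats.record_match_attempt(cost=0.02, fired=True, saved=0.5)
stats.record_match_attempt(cost=0.02, fired=False)
print(stats.utility())  # 0.46: the rule currently pays for itself
```

A rule whose applicability test is expensive but which rarely fires accumulates cost without savings, so its utility drifts negative, which is exactly the situation the paper's third mechanism is designed to detect.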
