Abstract

This chapter discusses learning approaches to prune the successor set(s). It studies the exclusion of forbidden states or move sequences and the localization of the search using the notion of relevance, and it distinguishes between on-the-fly and offline learning. One of the most effective approaches to tackling large problem spaces is to prune (i.e., cut off branches from) the search tree. There are multiple reasons for pruning: some branches may not lead to a goal state at all, others lead only to inferior solutions; some result in positions already reached on different paths, and others are redundant in the sense that, although they might lead to a solution, alternative branches that also lead to a solution remain. All state space pruning techniques reduce the node-branching factor of the search tree so that fewer successor nodes have to be analyzed. Since a smaller part of the state space is generated, pruning saves both search time and space. However, there may be a trade-off between the two: some techniques require rather complex data structures, so maintaining the pruning information can itself become involved. Static pruning techniques detect pruning knowledge prior to the main search routine. Other pruning rules may not be known to the search algorithm at the start and have to be inferred during the execution of the program. This leads to layered search algorithms: the top-level search looks for problem solutions, while frequently invoked lower-level searches refine the pruning knowledge.
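
To make the idea of pruning the successor set concrete, the following minimal sketch (not taken from the chapter) runs breadth-first search on a hypothetical 5x5 grid domain. It prunes duplicate positions reached on different paths and prunes redundant move sequences, here sequences in which a move is immediately undone by its inverse. All names (GOAL, MOVES, FORBIDDEN, successors, bfs) are illustrative assumptions for this sketch.

```python
from collections import deque

# Hypothetical toy domain: an agent on a 5x5 grid, goal at a fixed cell.
GOAL = (3, 3)
MOVES = {'U': (0, 1), 'D': (0, -1), 'L': (-1, 0), 'R': (1, 0)}

# Pruning knowledge determined offline: a move followed by its inverse can
# never be part of a shortest solution, so such branches are cut off.
FORBIDDEN = {('U', 'D'), ('D', 'U'), ('L', 'R'), ('R', 'L')}

def successors(state, last_move):
    """Generate successors, pruning moves that complete a forbidden sequence."""
    x, y = state
    for move, (dx, dy) in MOVES.items():
        if last_move is not None and (last_move, move) in FORBIDDEN:
            continue  # redundant branch: undoes the previous move
        nx, ny = x + dx, y + dy
        if 0 <= nx <= 4 and 0 <= ny <= 4:
            yield (nx, ny), move

def bfs(start):
    """Breadth-first search over the pruned successor sets."""
    frontier = deque([(start, None, [])])
    visited = {start}  # duplicate pruning: positions already reached
    while frontier:
        state, last_move, path = frontier.popleft()
        if state == GOAL:
            return path
        for nxt, move in successors(state, last_move):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, move, path + [move]))
    return None

path = bfs((0, 0))
print(len(path), path)  # a shortest path from (0, 0) to (3, 3) has 6 moves
```

In this sketch the pruning rules are fixed in advance (static pruning); in the layered setting described above, the FORBIDDEN set would instead be refined by lower-level searches while the top-level search runs.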
