Abstract

We start by describing and characterising the search space explored by genetic programming (GP). We show how to compute the size of the search space. Then, we introduce some work on the distribution of functionality of the programs in the search space and indicate its scientific and practical consequences. In particular, we explain why GP can work despite the immensity of the search space it explores. Then, we show recent theory on halting probability that extends these results to forms of Turing-complete GP. This indicates that Turing-complete GP has a hard time solving problems unless certain measures are put in place. Having characterised the search space, in the second part of the tutorial we characterise GP as a search algorithm using schema theory. We introduce the basics of schema theory, explaining how one can derive an exact probabilistic description of genetic algorithms (GAs) and GP based on schemata. We illustrate the lessons that can be learnt from the theory and offer practitioners some recipes for doing GP well. These include important new results on bloat in GP and ways to cure it. Despite its technical content, a big effort has been made to keep the use of mathematical formulae to a minimum.
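To illustrate the kind of search-space size computation the abstract refers to, here is a minimal sketch (not taken from the tutorial itself) that counts the number of syntactically distinct tree-shaped programs of depth at most d, assuming a tree-based GP representation with a given number of terminals and a function set with known arities. The function and example primitive set below are hypothetical choices for illustration.

```python
# Minimal sketch (illustrative): counting syntactically distinct GP trees
# of depth <= max_depth for a given primitive set.
#
# A depth-0 tree is a single terminal. A deeper tree is a function node
# whose arity-many subtrees each have depth <= max_depth - 1.

def tree_count(num_terminals, function_arities, max_depth):
    """Return the number of distinct trees of depth <= max_depth."""
    if max_depth == 0:
        return num_terminals
    smaller = tree_count(num_terminals, function_arities, max_depth - 1)
    return num_terminals + sum(smaller ** a for a in function_arities)

if __name__ == "__main__":
    # Example: terminals {x, 1}, functions {+, -, *, /}, all of arity 2.
    for d in range(5):
        print(d, tree_count(2, [2, 2, 2, 2], d))
```

Even for this tiny primitive set the count grows doubly exponentially with depth, which gives a concrete sense of the "immensity of the search space" mentioned above.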
