Abstract

For all supervised learning problems where the quality of solutions is measured by a distance between target and output values (error), the geometric semantic operators of genetic programming induce an error surface characterized by the absence of locally suboptimal solutions (a unimodal error surface). Thus, genetic programming that uses geometric semantic operators, called geometric semantic genetic programming, has a potential advantage in terms of evolvability over many existing computational methods. This positions geometric semantic genetic programming as a possible new state-of-the-art machine learning methodology. Nevertheless, much research on geometric semantic genetic programming is still needed. This chapter is aimed at researchers and students who are not yet familiar with geometric semantic genetic programming and who are willing to contribute to this exciting and promising field. Its main objective is to explain why the error surface induced by geometric semantic operators is unimodal, and why this fact is important. Furthermore, the chapter stimulates the reader by presenting some promising applicative results obtained so far. The reader will also discover that some properties of geometric semantic operators may help limit overfitting, bestowing on genetic programming a very interesting generalization ability. Finally, the chapter suggests further reading and discusses open issues of geometric semantic genetic programming.
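To make the central claim concrete, the following is a minimal Python sketch, not taken from the chapter, that works directly on program semantics (vectors of outputs over the training cases). It assumes the error is the Euclidean distance to the target vector and follows the standard definitions of geometric semantic crossover and mutation; names such as gs_crossover, gs_mutation and the mutation step ms are illustrative.

    import numpy as np

    rng = np.random.default_rng(42)

    # Target semantics: the desired output of a program on each training case.
    target = rng.normal(size=20)

    def error(semantics):
        # Fitness of a program = distance between its output vector and the target.
        return np.linalg.norm(semantics - target)

    def gs_mutation(semantics, ms=0.1):
        # Geometric semantic mutation: perturb the parent's semantics by
        # ms * (r1 - r2), where r1 and r2 stand for the semantics of two
        # random trees with outputs in [0, 1].
        r1 = rng.uniform(0.0, 1.0, size=semantics.shape)
        r2 = rng.uniform(0.0, 1.0, size=semantics.shape)
        return semantics + ms * (r1 - r2)

    def gs_crossover(s1, s2):
        # Geometric semantic crossover: a convex combination of the parents'
        # semantics, weighted case by case by a random tree with outputs in [0, 1].
        w = rng.uniform(0.0, 1.0, size=s1.shape)
        return w * s1 + (1.0 - w) * s2

    # The error surface seen by these operators lives in semantic space, where
    # fitness is simply the distance from the target: the target is the only
    # optimum, so there are no locally suboptimal solutions.
    parent = rng.normal(size=20)
    mutant = gs_mutation(parent)          # a small, unbiased step in semantic space
    child = gs_crossover(parent, target)  # crossing with a globally optimal individual
    assert error(child) <= error(parent)  # the offspring cannot be worse here
    print(round(error(parent), 3), round(error(child), 3))

Note that the sketch manipulates semantics directly; in geometric semantic genetic programming the same effect is obtained by syntactic operators on trees whose output vectors realize exactly these combinations.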
