Abstract

Integrating memory into evolutionary algorithms is one major approach to enhance their performance in dynamic environments. An abstract memory scheme has been recently developed for evolutionary algorithms in dynamic environments, where the abstraction of good solutions is stored in the memory instead of good solutions themselves to improve future problem solving. This paper further investigates this abstract memory with a focus on understanding the relationship between learning and memory, which is an important but poorly studied issue for evolutionary algorithms in dynamic environments. The experimental study shows that the abstract memory scheme enables learning processes and hence efficiently improves the performance of evolutionary algorithms in dynamic environments.

Highlights

  • A main concern of evolutionary algorithms (EAs) for solving dynamic optimization problems (DOPs) is to maintain the genetic diversity of the population [6,10,16] in order to guarantee continuing and sustainable adaptability; although such approaches have proven successful for certain dynamic environments, there are some points of criticism

  • A related example is an analysis of how self-adaptive mutation steps reflect the movement of the optima [3], which can be regarded as an implicit learning process

  • This paper investigates an abstract memory scheme for EAs in dynamic environments, where memory is used to store the abstraction of good solutions


Summary

Introduction

A main concern of evolutionary algorithms (EAs) for solving dynamic optimization problems (DOPs) is to maintain the genetic diversity of the population [6,10,16] in order to guarantee continuing and sustainable adaptability. We show that the abstract memory scheme enables learning processes conceptually and functionally similar to those considered in machine learning. It explicitly uses past and present solutions in an abstraction process that is employed to improve future problem solving, and it differentiates between different kinds of dynamics of the fitness landscape. A related computer science example that applies these principles is the memory design of autonomous agents in artificial life for dynamic environments [9]. In machine learning, this matter can be formalized further by defining the learning problem as finding a mapping between inputs and outputs [14]. The filling of the abstract memory takes place during the run-time of the EA on the dynamic fitness function; it gradually builds a mapping between the search space elements and good solutions. In this way, we bring together learning and memory for evolutionary optimization in dynamic environments.
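The idea of storing an abstraction of good solutions, rather than the solutions themselves, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a grid partition of a box-bounded search space, per-cell counters as the abstraction, and retrieval that samples cells proportionally to their counts and places each individual at a random position within its cell. All names (`AbstractMemory`, `store`, `retrieve`) are hypothetical.

```python
import random

class AbstractMemory:
    """Illustrative abstract memory: stores per-cell counts of good
    solutions instead of the solutions themselves (assumed design)."""

    def __init__(self, lower, upper, cells_per_dim):
        self.lower, self.upper = lower, upper
        self.cells = cells_per_dim
        self.counts = {}  # cell index tuple -> abstraction count

    def _cell(self, x):
        # Map a point to the index tuple of its partition cell.
        return tuple(
            min(int((xi - lo) / (hi - lo) * self.cells), self.cells - 1)
            for xi, lo, hi in zip(x, self.lower, self.upper)
        )

    def store(self, solution):
        # Abstraction step: record only which cell held a good solution.
        c = self._cell(solution)
        self.counts[c] = self.counts.get(c, 0) + 1

    def retrieve(self, n):
        # Sample cells proportionally to their counts, then fix each
        # individual at a random position inside its sampled cell.
        cells = list(self.counts)
        weights = [self.counts[c] for c in cells]
        individuals = []
        for c in random.choices(cells, weights=weights, k=n):
            individuals.append([
                lo + (ci + random.random()) * (hi - lo) / self.cells
                for ci, lo, hi in zip(c, self.lower, self.upper)
            ])
        return individuals
```

After a change of the environment is detected, `retrieve` would seed part of the new population from regions that repeatedly contained good solutions, which is how the memory gradually realizes a mapping from search space elements to good solutions.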

The Abstract Memory Scheme
Abstract Memory Storage
Abstract Memory Dynamics
Experimental setup and performance measurement
Properties of the abstract memory
Learning behavior
Conclusions