Abstract

The theory of evolutionary computation (EC) has experienced rapid and productive growth in recent years. New proof techniques and novel theoretical frameworks have advanced our understanding of the processes and structures inherent in evolutionary optimization, expanding the frontiers of our knowledge further than ever before. Recent trends in this field, covered in this issue, include a better understanding of the behavior of evolutionary algorithms (EAs) in dynamic rather than purely static environments, a theoretical appreciation of the advantages of parallelizing evolutionary algorithms through a deeper grasp of the underlying dynamics, and an understanding of algorithm behavior on broad function classes, including NP-hard problems.

The primary goal of this special issue is to provide extended and polished versions of diverse examples of the best theoretical work presented at conferences in 2014, and to serve as a forum for researchers to advance the theoretical understanding of evolutionary computation methods. The papers included in this special issue span a broad range of topics and offer the reader a cross section of recent outstanding work in EC theory.

In dynamic optimization the objective function changes over time, and optimization algorithms face the additional challenge of tracking these changes in order to be successful. The article "Analysis of Randomised Search Heuristics for Dynamic Optimisation," by Thomas Jansen and Christine Zarges, presents a novel analytical framework for the analysis of randomized search heuristics on dynamic problems, inspired by the fixed-budget computations perspective.
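To make the dynamic setting concrete, the following is a minimal, self-contained sketch (not the paper's framework) of a (1+1) EA on a dynamic OneMax-style problem: the target string periodically flips to its complement, so the optimum oscillates between two complementary strings, and fitness over a fixed budget of steps, rather than a hitting time, is what the fixed-budget perspective would examine. All names and parameters here are illustrative.

```python
import random

def dynamic_onemax(x, target):
    """Fitness = number of positions matching the current (moving) target."""
    return sum(xi == ti for xi, ti in zip(x, target))

def one_plus_one_ea(n=20, steps=200, change_every=50, seed=0):
    """(1+1) EA on a dynamic OneMax: every `change_every` steps the target
    flips to its complement, loosely mirroring an oscillating optimum.
    Returns the fitness trajectory over a fixed budget of `steps`."""
    rng = random.Random(seed)
    target = [1] * n
    x = [rng.randint(0, 1) for _ in range(n)]
    history = []
    for t in range(steps):
        if t > 0 and t % change_every == 0:
            target = [1 - b for b in target]  # environment change
        # standard bit mutation with rate 1/n
        y = [1 - b if rng.random() < 1.0 / n else b for b in x]
        if dynamic_onemax(y, target) >= dynamic_onemax(x, target):
            x = y  # elitist acceptance
        history.append(dynamic_onemax(x, target))
    return history

h = one_plus_one_ea()
```

Each environment change roughly inverts the current solution's fitness, so the trajectory shows the algorithm repeatedly re-approaching a moved optimum, which is the behavior dynamic-optimization analyses quantify.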
The authors introduce an interesting new class of bi-stable dynamic functions, in which the optimum oscillates between two complementary strings, and apply the framework to analyze and compare the performance of evolutionary algorithms and artificial immune systems on this class.

Over three decades ago, László Lovász observed that in discrete optimization, submodularity is the counterpart to convexity. However, in contrast to the focus on convex functions in continuous evolutionary optimization, submodular functions have so far received comparatively little attention from EC theoreticians studying discrete functions. The article "Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms," by Tobias Friedrich and Frank Neumann, addresses this gap by analyzing the performance of evolutionary algorithms on different classes of submodular functions under matroid constraints.
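Submodularity is the diminishing-returns property: for sets A ⊆ B and an element x ∉ B, adding x to the smaller set A gains at least as much as adding it to the larger set B. The sketch below (illustrative only, not the paper's constructions) checks this property exhaustively for a small coverage function, a classic submodular example, and runs a greedy maximization under a cardinality constraint, which is the special case of a uniform matroid.

```python
from itertools import combinations

# Ground set: each element covers some "points"; f(S) = points covered.
coverage = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
    "d": {1, 6},
}

def f(S):
    covered = set()
    for e in S:
        covered |= coverage[e]
    return len(covered)

def is_submodular(elements):
    """Check diminishing returns: for all A <= B and x not in B,
    f(A | {x}) - f(A) >= f(B | {x}) - f(B)."""
    elems = list(elements)
    subsets = [set(c) for r in range(len(elems) + 1)
               for c in combinations(elems, r)]
    for A in subsets:
        for B in subsets:
            if not A <= B:
                continue
            for x in elems:
                if x in B:
                    continue
                if f(A | {x}) - f(A) < f(B | {x}) - f(B):
                    return False
    return True

def greedy(k):
    """Greedy maximization of f under |S| <= k (a uniform matroid)."""
    S = set()
    while len(S) < k:
        x = max((e for e in coverage if e not in S),
                key=lambda e: f(S | {e}) - f(S))
        S.add(x)
    return S

S = greedy(2)  # best pair here covers all 6 points
```

Coverage functions are always submodular, so `is_submodular` returns True here; analyses like the one in the article ask how well evolutionary algorithms, rather than this greedy baseline, approximate the optimum on such functions under general matroid constraints.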
