Abstract

This paper presents a new algorithm, Function Optimisation by Reinforcement Learning (FORL), for solving large-scale and complex function optimisation problems. In contrast to evolutionary algorithms (EAs), which rely on population-based search, FORL searches the dimensions in sequence. It also incorporates a memory of its search history by estimating and updating the values of the states it has visited, whereas EAs instead aggregate the individuals of a population towards the best individual selected from the current population. Through this sequential search and memory of history, FORL reduces the number of function evaluations (FEs). FORL has been evaluated against several EAs, including recently improved Evolutionary Programming, Genetic Algorithms, Particle Swarm Optimisation and other efficient EAs, on 23 benchmark functions that represent a range of the most challenging optimisation problems. The simulation studies show that FORL, using fewer FEs, finds more accurate solutions, particularly for high-dimensional multimodal function optimisation problems.
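The two mechanisms the abstract highlights, dimension-by-dimension search and a memory of visited states that saves function evaluations, can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's FORL algorithm: it greedily perturbs one coordinate at a time on a sphere benchmark and caches every evaluated state so that revisited states cost no extra FEs.

```python
import itertools

def sphere(x):
    """Benchmark objective: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return sum(v * v for v in x)

def sequential_search(f, dim=5, steps=200, step_size=0.5):
    """Toy dimension-by-dimension search with a memory of visited states.

    Illustration only (not the paper's FORL): each iteration perturbs ONE
    coordinate in sequence, and a cache of evaluated states (the "memory
    of history") avoids re-spending function evaluations (FEs).
    """
    x = [3.0] * dim            # arbitrary start point
    memory = {}                # state -> objective value
    fe_count = 0

    def evaluate(point):
        nonlocal fe_count
        key = tuple(round(v, 6) for v in point)
        if key not in memory:  # only previously unseen states cost an FE
            memory[key] = f(point)
            fe_count += 1
        return memory[key]

    best = evaluate(x)
    # Cycle through the dimensions in sequence, one move per iteration.
    for _, d in zip(range(steps), itertools.cycle(range(dim))):
        for delta in (-step_size, step_size):
            cand = list(x)
            cand[d] += delta
            val = evaluate(cand)
            if val < best:     # greedy accept per dimension
                x, best = cand, val
    return x, best, fe_count

point, best, fes = sequential_search(sphere)
```

Because rejected neighbours stay in the cache, later cycles that re-test them spend no further FEs, which is a crude stand-in for the history mechanism the abstract attributes to FORL.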
