Abstract
This paper presents a new algorithm, Function Optimisation by Reinforcement Learning (FORL), for solving large-scale and complex function optimisation problems. In contrast to evolutionary algorithms (EAs), which rely on population-based search, FORL searches the dimensions in sequence and incorporates a memory of history by estimating and updating the values of visited states, whereas EAs aggregate the individuals of a population towards the best individual selected in the current population. With its sequential search and memory of history, FORL reduces the number of function evaluations (FEs). FORL has been evaluated, in comparison with several EAs, including recently improved Evolutionary Programming, Genetic Algorithms, Particle Swarm Optimisation and other efficient EAs, on 23 benchmark functions that represent a range of the most challenging optimisation problems. The simulation studies show that FORL, using a smaller number of FEs, finds more accurate solutions, particularly for high-dimensional multimodal function optimisation problems.
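The abstract describes two ideas that distinguish FORL from population-based EAs: searching one dimension at a time, and remembering the estimated values of visited states. The sketch below is a minimal, hypothetical illustration of these two ideas only; the function names, parameters (`step`, `alpha`), and state discretisation are assumptions for illustration, not the paper's actual method.

```python
import random

def sphere(x):
    """A simple benchmark objective: sum of squares, global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def sequential_value_search(f, dim=5, iters=200, step=0.5, alpha=0.3, seed=0):
    """Illustrative sketch: optimise one dimension at a time (sequential
    dimensional search) while keeping a value estimate for each visited
    state (memory of history). Hypothetical, not the published FORL."""
    rng = random.Random(seed)
    x = [rng.uniform(-5.0, 5.0) for _ in range(dim)]
    values = {}                      # visited state -> estimated value ("memory")
    best, best_f = list(x), f(x)
    for t in range(iters):
        d = t % dim                  # dimensions are searched in sequence
        cand = list(x)
        cand[d] += rng.uniform(-step, step)
        key = tuple(round(c, 1) for c in cand)   # coarse state discretisation (assumed)
        fc = f(cand)
        # incremental update of the visited state's value estimate
        old = values.get(key, fc)
        values[key] = old + alpha * (fc - old)
        if fc < f(x):                # keep the move if it improves the objective
            x = cand
        if fc < best_f:
            best, best_f = list(cand), fc
    return best, best_f

best, best_f = sequential_value_search(sphere)
```

Each iteration spends one function evaluation on a single dimension, and the `values` table retains information about previously visited states, which is the mechanism the abstract credits for reducing the number of FEs.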