Abstract

In certain optimization problems, auxiliary objectives can be used alongside the target objective. These auxiliary objectives may or may not be helpful, and it is often impossible to determine in advance which is the case. In this work we consider the EA+RL method, which dynamically selects auxiliary objectives in random local search using reinforcement learning. The runtime of this method has already been theoretically analysed on various monotonic functions, where it was shown that EA+RL can exclude harmful auxiliary objectives from consideration. EA+RL has also shown good results on several real-world problems. However, it has not been theoretically analysed whether this method can efficiently optimize non-monotonic functions using simple evolutionary algorithms and reinforcement learning agents. In this paper we consider optimization of the non-monotonic JUMP function with the EA+RL method. We use two auxiliary objectives: one is helpful during the first phase of optimization, the other during the last phase. During the remaining phases they are constant, so they neither help nor hinder optimization. We show that EA+RL solves this problem in polynomial time with probability at least Ω(l/n) using random local search, which is impossible for conventional random local search without learning. We also propose a modification of EA+RL that is guaranteed to find the optimum.
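
To make the setup concrete, the following is a minimal Python sketch of the kind of algorithm the abstract describes: random local search whose acceptance objective is chosen at each step by an epsilon-greedy Q-learning agent, with the change in target fitness as the reward (the reward scheme used in earlier EA+RL work). The JUMP definition follows the standard form from the runtime-analysis literature, and the two auxiliary objectives aux1 and aux2 are hypothetical stand-ins for the phase-wise helpers described above; the paper's exact objective definitions, state space, and learning rule may differ.

    import random

    def jump(x, l):
        # Standard JUMP_l (runtime-analysis literature); the paper's exact
        # definition may differ -- this version is for illustration only.
        n, ones = len(x), sum(x)
        if ones <= n - l or ones == n:
            return l + ones
        return n - ones

    def aux1(x, l):
        # Hypothetical auxiliary objective: increases with the number of
        # one-bits during the first phase, constant afterwards.
        ones, n = sum(x), len(x)
        return ones if ones <= n - l else n - l

    def aux2(x, l):
        # Hypothetical auxiliary objective: constant during the first
        # phase, increases with the number of one-bits during the last.
        ones, n = sum(x), len(x)
        return ones if ones > n - l else 0

    def ea_rl_rls(n=30, l=3, eps=0.1, alpha=0.5, budget=100_000, seed=1):
        """EA+RL sketch: RLS whose acceptance objective is picked each
        step by an eps-greedy agent; reward = change in target fitness."""
        rng = random.Random(seed)
        x = [rng.randint(0, 1) for _ in range(n)]
        objectives = [lambda y: jump(y, l),
                      lambda y: aux1(y, l),
                      lambda y: aux2(y, l)]
        Q = {}  # Q[(state, action)]; state = current target fitness
        for _ in range(budget):
            s = jump(x, l)
            if s == n + l:            # optimum of JUMP_l reached
                return x
            if rng.random() < eps:    # explore: random objective
                a = rng.randrange(len(objectives))
            else:                     # exploit: best-valued objective
                a = max(range(len(objectives)),
                        key=lambda i: Q.get((s, i), 0.0))
            y = x[:]                  # RLS mutation: flip one random bit
            y[rng.randrange(n)] ^= 1
            if objectives[a](y) >= objectives[a](x):
                x = y                 # accept wrt the chosen objective
            r = jump(x, l) - s        # reward: change in target fitness
            # simplified (myopic) value update, one bandit per state
            Q[(s, a)] = Q.get((s, a), 0.0) + alpha * (r - Q.get((s, a), 0.0))
        return x

With only the target objective, RLS gets stuck once it must cross the fitness valley of JUMP; the sketch illustrates how a learning agent can instead switch to whichever auxiliary objective is informative in the current phase.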

