Abstract

The effectiveness of Genetic Algorithms (GAs) depends heavily on the appropriate setting of their parameters. Moreover, optimal values for these parameters depend on both the type of GA and the structure of the application problem, and must be tuned for each particular setting individually. Such tuning therefore requires special expertise and many experiments to validate the parameter setting. To address this problem, a method called "adaptive parameter control" was proposed, which adaptively controls the parameters of an evolutionary algorithm. However, because this method simply increases the selection probability of a search operator that produced a well-evaluated individual, it tends to be a short-sighted optimization strategy. In contrast, another method has been proposed that realizes far-sighted parameter control of GAs using Reinforcement Learning (RL). That method, however, considers neither the computational cost of the search operators nor the population-based search characteristics of GAs. Here, we propose a refined RL method for parameter control in which (1) the reward decision rules are carefully designed with the population-based search characteristics of GAs in mind, and (2) the computational cost of each search operator is taken into account. This method is expected to efficiently learn how to optimally select GA search operators for approximately solving Traveling Salesman Problems (TSPs).
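
As a rough illustration of the idea described above, the sketch below pairs a steady-state GA on a random Euclidean TSP instance with a simple stateless RL (bandit-style) rule that chooses among three search operators. The operator set (swap, inversion, one-pass 2-opt), the epsilon-greedy selection, the reward of "improvement over the population best minus a weighted calculation cost", and the cost_weight parameter are all illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch: cost-aware RL selection of GA search operators for a TSP.
import random
import time


def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))


# --- Search operators (assumed set; the paper's operators may differ) ---
def swap_mutation(tour):
    a, b = random.sample(range(len(tour)), 2)
    t = tour[:]
    t[a], t[b] = t[b], t[a]
    return t


def inversion_mutation(tour):
    a, b = sorted(random.sample(range(len(tour)), 2))
    return tour[:a] + tour[a:b + 1][::-1] + tour[b + 1:]


def two_opt_local_search(tour, dist):
    # Deliberately more expensive operator: one greedy pass of 2-opt moves.
    best = tour[:]
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
            if tour_length(cand, dist) < tour_length(best, dist):
                best = cand
    return best


OPERATORS = ["swap", "inversion", "two_opt"]


def apply_operator(name, tour, dist):
    if name == "swap":
        return swap_mutation(tour)
    if name == "inversion":
        return inversion_mutation(tour)
    return two_opt_local_search(tour, dist)


def run(n_cities=20, pop_size=30, generations=200,
        alpha=0.1, epsilon=0.2, cost_weight=0.5, seed=0):
    random.seed(seed)
    pts = [(random.random(), random.random()) for _ in range(n_cities)]
    dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
             for (x2, y2) in pts] for (x1, y1) in pts]

    pop = [random.sample(range(n_cities), n_cities) for _ in range(pop_size)]
    q = {op: 0.0 for op in OPERATORS}            # learned value per operator
    best = min(pop, key=lambda t: tour_length(t, dist))

    for _ in range(generations):
        # Epsilon-greedy operator selection (stateless RL / bandit-style sketch).
        if random.random() < epsilon:
            op = random.choice(OPERATORS)
        else:
            op = max(q, key=q.get)

        parent = min(random.sample(pop, 3), key=lambda t: tour_length(t, dist))
        start = time.perf_counter()
        child = apply_operator(op, parent, dist)
        cost = time.perf_counter() - start       # operator calculation cost

        # Reward: improvement over the current population best, minus a
        # cost penalty (assumed reward form, weighted by cost_weight).
        improvement = max(0.0, tour_length(best, dist) - tour_length(child, dist))
        reward = improvement - cost_weight * cost
        q[op] += alpha * (reward - q[op])

        # Steady-state replacement of the worst individual.
        worst = max(range(pop_size), key=lambda i: tour_length(pop[i], dist))
        pop[worst] = child
        if tour_length(child, dist) < tour_length(best, dist):
            best = child

    return tour_length(best, dist), q


if __name__ == "__main__":
    length, values = run()
    print(f"best tour length: {length:.3f}")
    print("learned operator values:", values)
```

In this sketch, the learned values in q indicate which operator the controller currently favors once its improvement is discounted by its running cost; cheap mutations that rarely improve the population best and expensive local search that improves it often end up weighed against each other through the same reward signal.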
