Abstract
As a genetics-based machine learning technique, the zeroth-level classifier system based on average-reward reinforcement learning (ZCSAR) evolves solutions that optimize the average reward per time step. However, initial experiments have shown that in some cases the performance of ZCSAR oscillates heavily during the learning period, or fails to reach the optimum during the testing period. In this paper, we modify the selection strategies in ZCSAR to improve its performance while keeping changes to ZCSAR minimal. The proposed strategies use tournament selection to choose parents in the genetic algorithm (GA), and roulette-wheel selection both to choose actions in the match set and to choose classifiers for deletion in the GA and in covering. Experimental results show that ZCSAR with the new selection strategies evolves more promising solutions, is sufficiently independent of parameter settings, and oscillates less during the learning period.
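The abstract names two standard selection operators but the paper's own implementation is not shown here. As a generic sketch under that caveat, the two operators can be written as follows in Python (the function names and the list-based population representation are illustrative, not taken from the paper):

```python
import random

def tournament_select(population, fitness, k=2, rng=random):
    """Tournament selection: sample k individuals at random and
    return the one with the highest fitness."""
    contenders = rng.sample(range(len(population)), k)
    best = max(contenders, key=lambda i: fitness[i])
    return population[best]

def roulette_select(population, weights, rng=random):
    """Roulette-wheel (fitness-proportionate) selection: return an
    individual with probability proportional to its weight."""
    total = sum(weights)
    pick = rng.uniform(0.0, total)
    acc = 0.0
    for individual, w in zip(population, weights):
        acc += w
        if pick < acc:
            return individual
    return population[-1]  # guard against floating-point rounding
```

In a ZCS-style system, `tournament_select` would draw GA parents by classifier fitness, while `roulette_select` would weight actions by the prediction of the advocating classifiers in the match set, or (for deletion) weight classifiers by a deletion vote.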
Journal of Ambient Intelligence and Humanized Computing