Abstract

Machine learning, one of today’s most active research topics, creates opportunities across a wide range of research areas. Computational intelligence and metaheuristics likewise offer strategies that have proven effective for optimization problems, yet studies that bring these two approaches together remain scarce. In this context, the present paper introduces a Q-learning reinforcement learning strategy for binary optimization problems. The developed algorithm acts as a reinforcement and recommendation system that evaluates the available optimizers, assigns rewards, and promotes or demotes them accordingly, so that more promising optimizers are invoked more frequently. The proposed Q-learning algorithm employs Particle Swarm Optimization (PSO), a Genetic Algorithm (GA) and a hybrid of the two, namely genetic-based PSO (gbPSO), as its optimizers; drawing on several optimizers and the additional statistical information they provide is intended to help the search avoid local optima. Furthermore, all optimizers are enhanced with an initial solution generation technique and a triggered random immigrants mechanism that preserves swarm diversity. In addition, a mutation procedure that gradually reduces diversity is adopted, encouraging a more intensified search towards the end of the run. Moreover, while PSO requires transfer functions to operate in binary spaces, the adopted and further improved gbPSO does not need such auxiliary procedures. Finally, the performance of all algorithms is analysed on the set-union knapsack problem, a binary problem that has recently attracted considerable attention and has a wide range of real-life applications. A comprehensive experimental study supported by appropriate statistical tests demonstrates that promising improvements are achieved.
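
To make the selection mechanism described in the abstract more concrete, the following is a minimal, hypothetical sketch of a Q-learning optimizer selector in Python. It assumes a simplified single-state Q-table over the three optimizers and an epsilon-greedy policy; the helper names run_optimizer and evaluate, the reward scheme, and all hyperparameters are illustrative placeholders rather than the paper's actual design.

```python
import random

# Sketch of a Q-learning based optimizer selector (single-state simplification).
# Optimizer names, reward values and hyperparameters are illustrative assumptions.

OPTIMIZERS = ["PSO", "GA", "gbPSO"]      # candidate low-level optimizers
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2    # learning rate, discount factor, exploration rate

q_table = {name: 0.0 for name in OPTIMIZERS}

def select_optimizer():
    """Epsilon-greedy choice: mostly exploit the currently best-rated optimizer."""
    if random.random() < EPSILON:
        return random.choice(OPTIMIZERS)
    return max(q_table, key=q_table.get)

def update_q(name, reward):
    """Standard Q-learning update collapsed to a single state."""
    best_next = max(q_table.values())
    q_table[name] += ALPHA * (reward + GAMMA * best_next - q_table[name])

def run_hyperheuristic(run_optimizer, evaluate, iterations=100):
    """run_optimizer(name) -> candidate solution; evaluate(sol) -> objective value.
    Both callables are hypothetical stand-ins for the underlying metaheuristics."""
    best_value = float("-inf")
    for _ in range(iterations):
        name = select_optimizer()
        value = evaluate(run_optimizer(name))
        # Reward an improvement over the incumbent (promotion); penalize otherwise (demotion).
        reward = 1.0 if value > best_value else -1.0
        update_q(name, reward)
        best_value = max(best_value, value)
    return best_value
```

Collapsing the Q-table to a single state turns the selector into a bandit-style recommender; a fuller implementation could instead define states from search-progress statistics, which this sketch deliberately leaves out.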
