Abstract

This paper presents a new Reinforcement Learning (RL)-based control approach that uses Policy Iteration (PI) and the metaheuristic Grey Wolf Optimizer (GWO) algorithm to train Neural Networks (NNs). Owing to an efficient tradeoff between exploration and exploitation, the GWO algorithm performs well in NN training and in solving complex optimization problems. The proposed approach is compared with the classical PI RL-based control approach that uses the Gradient Descent (GD) algorithm and with the RL-based control approach that uses the metaheuristic Particle Swarm Optimization (PSO) algorithm. The experiments are conducted on nonlinear servo system laboratory equipment. Each approach is evaluated on how well it solves the optimal reference tracking problem for an experimental servo system position control system. The policy NNs specific to all three approaches are implemented as state feedback controllers with integrators to remove the steady-state control errors and thus ensure the convergence of the objective function. Because of the random nature of metaheuristic algorithms, the experiments for the GWO and PSO algorithms are run multiple times and the results are averaged before the conclusions are presented. The experimental results show that, for the control objective considered in this paper, the GWO algorithm is a better solution than the GD and PSO algorithms.
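To illustrate the exploration-exploitation mechanism the abstract refers to, a minimal sketch of one GWO iteration follows, assuming the canonical formulation of Mirjalili et al.; the function and variable names (e.g., `gwo_step`, `wolves`, `fitness`) are hypothetical and do not come from the paper itself.

```python
import numpy as np

def gwo_step(wolves, fitness, a):
    """One Grey Wolf Optimizer update (canonical formulation; illustrative only).

    wolves  : (n_wolves, dim) array of candidate solutions, e.g., NN weight vectors
    fitness : (n_wolves,) objective values (lower is better)
    a       : coefficient decayed linearly from 2 to 0 over iterations,
              shifting the pack from exploration toward exploitation
    """
    # Rank the pack; the three best wolves (alpha, beta, delta) lead the search.
    order = np.argsort(fitness)
    alpha, beta, delta = wolves[order[0]], wolves[order[1]], wolves[order[2]]

    new_wolves = np.empty_like(wolves)
    for i, x in enumerate(wolves):
        candidates = []
        for leader in (alpha, beta, delta):
            r1, r2 = np.random.rand(x.size), np.random.rand(x.size)
            A = 2 * a * r1 - a          # |A| > 1 pushes away (exploration), |A| < 1 pulls in
            C = 2 * r2                  # random weighting of the leader's influence
            D = np.abs(C * leader - x)  # distance to the leader
            candidates.append(leader - A * D)
        # Average the three leader-guided moves to get the new position.
        new_wolves[i] = np.mean(candidates, axis=0)
    return new_wolves
```

In the setting described above, each wolf would encode the weights of the policy NN and the fitness would be the reference tracking objective being minimized.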
