Abstract

This research presents a method that accelerates learning and helps avoid local minima in the policy gradient algorithm. Reinforcement learning has the advantage of not requiring a model, so it can improve control performance precisely when a model is unavailable, such as when an error occurs. The proposed method explores the action space efficiently and quickly. First, it quantifies the similarity between the agent's actions and those of a conventional controller. This similarity is then incorporated into the main reward function. The resulting reward-shaping mechanism guides the agent toward higher returns by acting as an attractive force during gradient ascent. To validate our concept, we build a satellite attitude control environment with a similarity subsystem. The results demonstrate the effectiveness and robustness of our method.
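As a rough illustration of the similarity-based reward shaping described above (not the paper's exact formulation), the sketch below adds a bonus to the environment reward that grows as the agent's action approaches the action a conventional controller would take. The function name, the Gaussian similarity kernel, and the shaping weight are all assumptions introduced for illustration.

```python
import numpy as np

def similarity_shaped_reward(base_reward, agent_action, controller_action,
                             weight=0.5, scale=1.0):
    """Hypothetical shaping term: reward the agent for acting like a
    conventional (e.g. PD) attitude controller, in addition to the task reward."""
    diff = np.asarray(agent_action) - np.asarray(controller_action)
    # Gaussian similarity: 1.0 when the actions match, decays toward 0 as they diverge.
    similarity = np.exp(-scale * float(np.dot(diff, diff)))
    # The shaped reward pulls the policy toward controller-like actions during gradient ascent.
    return base_reward + weight * similarity

# Example usage with hypothetical torque commands for a 3-axis satellite.
agent_action = np.array([0.10, -0.02, 0.05])        # torque from the learned policy
controller_action = np.array([0.12, -0.01, 0.04])   # torque from a conventional controller
shaped = similarity_shaped_reward(base_reward=-0.3,
                                  agent_action=agent_action,
                                  controller_action=controller_action)
```

In practice the weight would typically be tuned or annealed so the shaping term guides early exploration without dominating the task reward; that schedule is not specified here and is left as an assumption.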
