Abstract

This study’s primary objective was to shorten the training time of Reinforcement Learning (RL), one of the machine learning methods, by using a proportional-integral-derivative (PID) controller during training. The study uses a balancing robot with two wheels on the same axis that can be controlled independently. While the robot is balanced, the RL software block observes how the PID block maintains balance, and thus learns how to respond to disturbances without the robot physically falling and being set upright again. Training the RL block requires creating approximately 500 policy / reward / path equations between the current-state and future-state matrices; the number of equations grows considerably when quantities such as previous position and acceleration are added. Approximately 1000 trial-and-error episodes are normally required for training, which means many falling / rising cycles. With the method we present, the RL block learned to keep the robot balanced within 900 trials, without falling and without requiring human intervention, shortening the training time by about 60%.
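The scheme described above can be sketched in code: a PID controller balances the robot while the RL block records the controller’s state–action pairs as demonstrations, instead of discovering corrections through repeated falls. This is a minimal illustrative sketch, not the paper’s implementation; the class names, gains, and the use of tilt angle as the sole state variable are assumptions.

```python
class PID:
    """Textbook PID controller; gains and timestep are illustrative."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def control(self, error):
        # Standard PID law: proportional + integral + derivative terms.
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


def collect_demonstrations(pid, tilt_angles):
    """Record (state, action) pairs while the PID block keeps the robot upright.

    The RL block can then fit its policy to these pairs (a form of
    behavior cloning) before refining it with its own reward signal,
    avoiding the falling / rising cycles of training from scratch.
    """
    return [(theta, pid.control(theta)) for theta in tilt_angles]


if __name__ == "__main__":
    # Hypothetical gains and tilt-angle trace (radians), for illustration only.
    pid = PID(kp=8.0, ki=0.5, kd=1.2, dt=0.01)
    demos = collect_demonstrations(pid, [0.05, 0.03, -0.02, 0.0])
    print(len(demos))
```

In this setup the PID block supplies a safe "teacher" trajectory, so each recorded pair replaces a trial that would otherwise have ended in a fall.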
