Abstract

This work uses reinforcement learning (RL) to achieve, in a model-free way, the first data-driven real-time control of an actual (not simulated) triple inverted pendulum (TIP). A swing-up control task for the TIP is formulated as a Markov decision process with a dense reward function and solved in real time using a model-free RL approach. To increase the sample efficiency of learning, a structure-aware virtual experience replay (VER) method is proposed that works together with an off-policy actor-critic algorithm. The VER exploits the geometric symmetry of TIPs to create virtual sample trajectories from measured ones, then uses the resulting multifold augmented dataset to train the actor and critic networks during learning. These structure-informed training data supply additional information and thereby increase the convergence speed of network learning. We combine the proposed VER with a state-of-the-art actor-critic algorithm and validate its effectiveness through numerical simulations. Notably, including VER improves learning efficiency, reducing the required number of trials, training steps, and overall training time by approximately 66.67%. Finally, experiments demonstrate the real-time control capability of the proposed approach on an actual TIP system.
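The abstract does not detail the VER construction itself. As a rough illustration of the general idea of symmetry-based experience augmentation, the sketch below doubles the data in an off-policy replay buffer by mirroring each measured transition. The state layout, the sign-flip symmetry, and the reward invariance assumed here are illustrative choices, not the authors' exact method.

```python
import numpy as np

# A minimal sketch of symmetry-based virtual experience replay (VER),
# assuming a hypothetical left-right mirror symmetry of the pendulum:
# negating the state (positions, angles, velocities) and the horizontal
# force action is taken to yield another dynamically valid transition.
# The dense reward is assumed invariant under this reflection (e.g.,
# built from even functions of the link angles); the paper's actual
# VER construction and state definition may differ.

def mirror_transition(state, action, reward, next_state, done):
    """Create one virtual transition by reflecting a measured one."""
    return -state, -action, reward, -next_state, done

class SymmetryAugmentedBuffer:
    """Replay buffer that stores each measured transition plus its
    mirror, doubling the data seen by an off-policy actor-critic."""

    def __init__(self, capacity=100_000):
        self.capacity = capacity
        self.buffer = []

    def add(self, state, action, reward, next_state, done):
        state = np.asarray(state, dtype=np.float64)
        action = np.asarray(action, dtype=np.float64)
        next_state = np.asarray(next_state, dtype=np.float64)
        measured = (state, action, reward, next_state, done)
        virtual = mirror_transition(state, action, reward, next_state, done)
        for transition in (measured, virtual):
            if len(self.buffer) >= self.capacity:
                self.buffer.pop(0)  # drop the oldest transition
            self.buffer.append(transition)

    def sample(self, batch_size, rng=np.random.default_rng()):
        """Draw a uniform minibatch over measured and virtual samples."""
        idx = rng.integers(len(self.buffer), size=batch_size)
        states, actions, rewards, next_states, dones = zip(
            *(self.buffer[i] for i in idx))
        return (np.stack(states), np.stack(actions), np.asarray(rewards),
                np.stack(next_states), np.asarray(dones))
```

Under these assumptions, each real trial contributes twice as many training samples, which is the mechanism by which such augmentation can cut the number of trials and training steps needed for convergence.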
