Abstract

Overhead cranes are essential tools for loading and transporting materials in modern industry. A key challenge in overhead crane control is payload mass variation: a policy trained to control the crane with a fixed payload often fails when the payload mass changes. From a practical perspective, this paper therefore designs a novel deep reinforcement learning (DRL) control algorithm, domain randomization memory-augmented Beta proximal policy optimization (DR-MABPPO), which leverages a memory-augmented policy and incorporates a domain randomization (DR) training strategy to address the control problem of overhead cranes with payload mass variations. With the help of the DR training strategy and the memory-augmented policy, DR-MABPPO learns a universal policy that is robust to a wide range of payload masses. To the best of our knowledge, this is the first time a DRL technique has been applied to overhead crane control with payload mass variations. Simulation studies demonstrate the effectiveness of the proposed method in the presence of payload mass variations, exhibiting satisfactory control performance compared to PID and LQR controllers.
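
For illustration only, the sketch below shows the two ingredients named in the abstract: a Beta-distribution policy head with bounded action support, and episode-level domain randomization of the payload mass. It is a minimal sketch, not the paper's implementation; the network sizes, the 6-dimensional state, the mass range, and names such as `BetaPolicy` and `sample_payload_mass` are assumptions made for the example.

```python
import numpy as np
import torch
import torch.nn as nn
from torch.distributions import Beta


class BetaPolicy(nn.Module):
    """Policy head that parameterizes a Beta distribution over actions in (0, 1).

    The bounded support matches bounded actuator commands (e.g. trolley force)
    without the action clipping needed by Gaussian policies.
    """

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.alpha_head = nn.Linear(hidden, act_dim)
        self.beta_head = nn.Linear(hidden, act_dim)

    def forward(self, obs: torch.Tensor) -> Beta:
        h = self.body(obs)
        # softplus(.) + 1 keeps both concentration parameters above 1,
        # so the resulting Beta density is unimodal.
        alpha = nn.functional.softplus(self.alpha_head(h)) + 1.0
        beta = nn.functional.softplus(self.beta_head(h)) + 1.0
        return Beta(alpha, beta)


def sample_payload_mass(rng: np.random.Generator,
                        low_kg: float = 0.5, high_kg: float = 5.0) -> float:
    """Domain randomization step: draw a new payload mass at each episode reset,
    so the policy is trained over a range of masses rather than one fixed value.
    The [low_kg, high_kg] range is an assumed placeholder."""
    return float(rng.uniform(low_kg, high_kg))


# Example: one randomized episode start and one bounded action draw.
rng = np.random.default_rng(0)
mass = sample_payload_mass(rng)               # new payload mass for this episode
policy = BetaPolicy(obs_dim=6, act_dim=1)     # assumed 6-D crane state vector
obs = torch.zeros(1, 6)
dist = policy(obs)
action_01 = dist.sample()                     # action in (0, 1)
log_prob = dist.log_prob(action_01).sum(-1)   # used in the PPO probability ratio
force = 2.0 * action_01 - 1.0                 # rescale to a symmetric command
```

In this sketch, only the environment's payload mass is randomized; the observation itself does not include the mass, which is why a memory-augmented (recurrent or history-conditioned) policy is useful for inferring it implicitly from past transitions.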
