Abstract

As a data-driven design method, model-free optimal control based on reinforcement learning provides an effective way to find optimal control strategies. Because it relies on data rather than detailed dynamic models, its design is sensitive to the quality of the system data. A prerequisite for generating applicable data is that the system be open-loop stable (i.e., possess a stable equilibrium point), which limits the applicability of data-based design methods to practical control problems and explains why experimental studies and verifications are rare in the literature. To improve this situation and broaden the method's applications, we propose a pre-stabilizing mechanism and apply it, together with a reinforcement learning-based model-free optimal control method, to the motion control of a mechanical system, forming a so-called hierarchical control structure. We design two real-time control experiments on an underactuated system to verify its effectiveness. The results show that the proposed hierarchical control is quite promising for this mechanical system, even though the system is open-loop unstable with unknown dynamics.
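The core idea of the pre-stabilizing mechanism can be illustrated with a toy sketch. The scalar plant, its parameters, and the inner-loop gain below are all hypothetical choices for illustration, not the paper's system or method: an unstable plant is wrapped with a known stabilizing inner loop so that excitation data stay bounded, and the data-driven outer layer then works purely from measurements (here, a least-squares fit standing in for the reinforcement learning step).

```python
import numpy as np

# Hypothetical scalar plant x[k+1] = a*x[k] + b*u[k]; a and b are
# "unknown" to the learner and appear here only to simulate the loop.
a, b = 1.5, 1.0   # |a| > 1, so the plant is open-loop unstable
k0 = 1.0          # assumed pre-stabilizing inner-loop gain

# Inner loop u = -k0*x + v gives x[k+1] = (a - b*k0)*x[k] + b*v[k];
# the closed-loop pole a - b*k0 = 0.5 is stable, so the excitation
# signal v produces bounded, usable data.
rng = np.random.default_rng(0)
x = 1.0
X, V, Xn = [], [], []
for _ in range(200):
    v = rng.normal()          # exploration signal on the outer layer
    u = -k0 * x + v
    x_next = a * x + b * u
    X.append(x); V.append(v); Xn.append(x_next)
    x = x_next

# Data-driven outer layer (least squares as a stand-in for RL):
# fit x[k+1] ~ acl*x[k] + b*v[k] from the pre-stabilized data.
Phi = np.column_stack([X, V])
acl_hat, b_hat = np.linalg.lstsq(Phi, np.array(Xn), rcond=None)[0]
a_hat = acl_hat + b_hat * k0  # recover the open-loop pole from data
print(round(a_hat, 3), round(b_hat, 3))
```

Without the inner gain `k0`, the state grows geometrically and the logged data are numerically unusable; with it, the same data-driven fit recovers the unstable open-loop dynamics, which is the enabling role pre-stabilization plays for the hierarchical structure.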
