Abstract

Control of power electronics systems via deep reinforcement learning (DRL) has shown clear advantages over conventional model-based control design in recent years, owing to its adaptive and self-optimizing capabilities. However, the inevitable gap between the offline-trained agent and the real-life system remains a key obstacle to practical implementation. To enhance the robustness of the DRL controller for DC-DC buck converter systems feeding constant power loads (CPLs), this paper proposes a novel composite DRL controller that incorporates an extended state observer (ESO). The ESO compensates the mismatched lumped terms between the offline-trained agent and the real-life platform, effectively improving steady-state performance in practical implementation while preserving the optimized transient-time control performance of the system. The feasibility of the method is verified in MATLAB simulation by comparing the proposed algorithm against a PI controller.
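To illustrate the idea of combining a learned policy with disturbance compensation, the sketch below shows a discrete-time second-order linear ESO that estimates the lumped term acting on the converter's output-voltage dynamics and subtracts it from the DRL action. This is not the paper's implementation; all parameter values, the bandwidth-based gain choice, and the rl_policy stub are illustrative assumptions.

    import numpy as np

    # Illustrative plant/observer parameters (assumed, not from the paper)
    B0 = 500.0            # nominal input gain of the voltage dynamics
    OMEGA_O = 800.0       # observer bandwidth [rad/s]
    BETA1 = 2 * OMEGA_O   # ESO gains via bandwidth parameterization
    BETA2 = OMEGA_O ** 2
    DT = 1e-5             # control/sampling period [s]

    def eso_step(z1, z2, y_meas, u_applied):
        """One Euler step of a 2nd-order linear ESO.

        z1 tracks the measured output voltage; z2 tracks the lumped term
        (model mismatch plus CPL disturbance) unseen by the offline agent.
        """
        err = y_meas - z1
        z1_next = z1 + DT * (z2 + B0 * u_applied + BETA1 * err)
        z2_next = z2 + DT * (BETA2 * err)
        return z1_next, z2_next

    def rl_policy(obs):
        """Placeholder for the offline-trained DRL actor (assumption)."""
        v_err, _v_meas = obs
        return -0.01 * v_err  # stand-in proportional action

    def composite_control(v_ref, v_meas, z2):
        """DRL action plus ESO-based compensation of the estimated lumped term."""
        u_rl = rl_policy((v_meas - v_ref, v_meas))
        u = u_rl - z2 / B0                      # cancel estimated disturbance
        return float(np.clip(u, 0.0, 1.0))      # duty-cycle limits

At each sampling instant the controller would call composite_control with the latest disturbance estimate, apply the duty cycle, then call eso_step with the measured voltage and the applied input to update the estimates; the compensation term z2 / B0 is what allows a fixed offline-trained policy to retain its steady-state accuracy under plant mismatch.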
