Abstract

Recent application studies of deep reinforcement learning (DRL) in power electronic systems have successfully demonstrated its superiority over conventional model-based control design methods, stemming from its adaptation and self-optimisation capabilities. However, the inevitable gap between offline training and real-life application presents a significant challenge for practical implementation, owing to the insufficient robustness of purely data-driven policies. With this in mind, this paper proposes a novel robust DRL controller that fuses an extended state observer (ESO), applied to DC–DC buck converter systems feeding constant power loads (CPLs). Specifically, the mismatched lumped terms are reconstructed by an ESO in real time and then fed forward into the agent's action, aiming to improve adaptability to parameter variations in real-life converter systems. Through simulation and experimental tests, the robustness enhancement of the proposed framework over model-free DRL and conventional PI controllers is clearly verified.
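
To illustrate the idea of reconstructing a lumped term with an ESO and feeding it forward into the agent's action, the sketch below gives a minimal discrete-time linear ESO with bandwidth parameterisation. It is only a simplified, matched-disturbance illustration under assumed dynamics (output y with nominal input gain b0); the paper's exact mismatched-disturbance observer, gains, and fusion rule are not stated in the abstract, so the class names, the compensated_duty combination, and all parameters here are hypothetical.

```python
import numpy as np


class LinearESO:
    """Minimal second-order linear ESO sketch for a plant assumed to
    behave as  y_dot = b0 * u + d,  where d is the lumped disturbance."""

    def __init__(self, b0, omega_o, dt):
        self.b0 = b0                  # assumed nominal input gain
        self.beta1 = 2.0 * omega_o    # observer gains from the usual
        self.beta2 = omega_o ** 2     # bandwidth parameterisation (omega_o)
        self.dt = dt
        self.y_hat = 0.0              # estimate of the measured output
        self.d_hat = 0.0              # estimate of the lumped disturbance

    def update(self, y_meas, u_prev):
        """Advance the observer one step (forward-Euler discretisation)."""
        e = y_meas - self.y_hat
        y_hat_dot = self.d_hat + self.b0 * u_prev + self.beta1 * e
        d_hat_dot = self.beta2 * e
        self.y_hat += self.dt * y_hat_dot
        self.d_hat += self.dt * d_hat_dot
        return self.d_hat


def compensated_duty(agent_action, d_hat, b0):
    """Feed the disturbance estimate forward into the DRL action
    (hypothetical fusion rule, clipped to a valid duty-cycle range)."""
    return float(np.clip(agent_action - d_hat / b0, 0.0, 1.0))


# Illustrative control-loop step: update the ESO with the measured output
# voltage and the previously applied duty cycle, then compensate the
# agent's raw action before applying it to the converter.
eso = LinearESO(b0=500.0, omega_o=800.0, dt=1e-4)   # assumed values
d_hat = eso.update(y_meas=11.7, u_prev=0.48)
u_applied = compensated_duty(agent_action=0.50, d_hat=d_hat, b0=eso.b0)
```

In this sketch the observer and the learned policy remain decoupled: the DRL agent is trained offline as usual, while the ESO runs online and corrects the action, which is the mechanism the abstract credits for robustness against parameter variations.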
