Abstract

Proton exchange membrane fuel cells (PEMFCs) are nonlinear systems that are difficult to model accurately, so a robust and adaptive controller is essential for regulating the stack temperature. This paper introduces a data-driven controller based on deep reinforcement learning for stack temperature control. Given the PEMFC system's nonlinearity, uncertainty, and sensitivity to environmental conditions, we propose a novel deep reinforcement learning algorithm: the deep deterministic policy gradient with prioritized experience replay and importance sampling (PEI-DDPG). The algorithm design incorporates prioritized experience replay, importance sampling, and an optimized sample-data storage structure, which together enhance the controller's performance. Simulation results demonstrate the proposed algorithm's superior effectiveness for PEMFC temperature control, leveraging the PEI-DDPG algorithm's high adaptability and robustness. The algorithm's effectiveness is further validated on the RT-LAB experimental platform. Compared to the TD3, GA-PID, and PID algorithms, the proposed PEI-DDPG algorithm reduces the average settling time by 8.3%, 17.13%, and 24.56%, and reduces overshoot by factors of 2.12, 4.16, and 4.32, respectively.
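The abstract names two of the key ingredients of PEI-DDPG: prioritized experience replay and importance sampling. As a rough illustration of how these two mechanisms fit together in a replay buffer (a generic sketch of proportional prioritization, not the paper's actual implementation; all class and parameter names here are illustrative), transitions are sampled with probability proportional to their priority raised to a power `alpha`, and each sampled transition carries an importance-sampling weight that corrects the resulting bias in the gradient estimate:

```python
import random

class PrioritizedReplayBuffer:
    """Minimal proportional prioritized experience replay (illustrative sketch).

    Sampling probability is P(i) = p_i^alpha / sum_j p_j^alpha, and each
    sampled transition gets an importance-sampling weight
    w_i = (N * P(i))^(-beta), normalized by the batch maximum.
    """

    def __init__(self, capacity, alpha=0.6, beta=0.4):
        self.capacity = capacity
        self.alpha = alpha    # how strongly priorities skew sampling
        self.beta = beta      # how strongly IS weights correct the bias
        self.data = []        # stored transitions (ring buffer)
        self.priorities = []  # one priority per stored transition
        self.pos = 0          # next write index

    def add(self, transition):
        # New transitions get the current max priority so they are
        # sampled at least once before their TD error is known.
        max_p = max(self.priorities, default=1.0)
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(max_p)
        else:
            self.data[self.pos] = transition
            self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size):
        scaled = [p ** self.alpha for p in self.priorities]
        total = sum(scaled)
        probs = [s / total for s in scaled]
        idxs = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        n = len(self.data)
        weights = [(n * probs[i]) ** (-self.beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]  # normalize for stable updates
        batch = [self.data[i] for i in idxs]
        return batch, idxs, weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # Priority is |TD error| plus a small epsilon so that no
        # transition's sampling probability collapses to zero.
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = abs(err) + eps
```

In a DDPG-style training loop, the critic's loss on each sampled batch would be weighted by `weights`, and `update_priorities` would be called with the new TD errors after each gradient step.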

