This paper presents a methodology for integrating Deep Reinforcement Learning (DRL) using a Deep Q-Network (DQN) agent into real-time experiments to achieve the Global Maximum Power Point (GMPP) of Photovoltaic (PV) systems under various environmental conditions. Conventional methods, such as the Perturb and Observe (P&O) algorithm, often become stuck at a Local Maximum Power Point (LMPP) and fail to reach the GMPP under Partial Shading Conditions (PSC). The main contribution of this work is the experimental validation of the DQN agent's implementation on a synchronous DC-DC buck (step-down) converter under both uniform irradiance and PSC. Additionally, we establish a testing pipeline for DRL models. The DQN agent's performance is benchmarked against the P&O algorithm. Simulation results consistently show the DQN agent outperforming the P&O algorithm in all scenarios. Although this trend is not fully replicated in the real-world test setup, significant gains are still observed: in PSC scenarios where the P&O algorithm becomes trapped at an LMPP, the DQN algorithm extracts up to 63.5% more power than the P&O algorithm. An open repository is available, containing the PCB schematics and layouts along with the code used for model training and deployment.
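To illustrate the general idea, the following is a minimal sketch of a DQN-style MPPT control step, in which the agent observes the PV operating point and selects a duty-cycle perturbation for the converter. The action set, observation layout, network shape, and placeholder measurements are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal illustrative sketch of DQN-based MPPT (not the paper's code).
import torch
import torch.nn as nn

# Hypothetical discrete action space: duty-cycle perturbations.
ACTIONS = [-0.02, -0.005, 0.0, 0.005, 0.02]

class QNet(nn.Module):
    """Maps a PV observation (voltage, current, duty cycle) to Q-values per action."""
    def __init__(self, obs_dim=3, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x):
        return self.net(x)

def select_duty_step(qnet, v, i, duty, eps=0.05):
    """Epsilon-greedy selection over duty-cycle perturbations."""
    if torch.rand(1).item() < eps:
        return ACTIONS[torch.randint(len(ACTIONS), (1,)).item()]
    obs = torch.tensor([[v, i, duty]], dtype=torch.float32)
    with torch.no_grad():
        return ACTIONS[qnet(obs).argmax(dim=1).item()]

# One control step: perturb the converter duty cycle; during training,
# the change in extracted power would serve as the reward signal.
qnet = QNet()
duty = 0.5
v, i = 30.0, 2.1  # placeholder PV panel measurements
duty = min(max(duty + select_duty_step(qnet, v, i, duty), 0.0), 1.0)
print(f"next duty cycle: {duty:.3f}")
```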