Abstract

Photovoltaic (PV) power generation is considered a clean energy source. However, the nonlinear output characteristics of solar modules make maximum power point tracking (MPPT) essential for efficient PV systems. Conventional MPPT techniques are easy to implement but require careful tuning of a fixed step size. In contrast, MPPT based on reinforcement learning (RL-MPPT) can learn to adjust the step size on its own, making it more adaptable to changing environments. As a typical RL algorithm, Q-learning finds the optimal control strategy from learned experience stored in a Q-table; as the cornerstone of the algorithm, the Q-table therefore has a decisive impact on control performance. In this paper, a novel Q-table design for reinforcement learning is proposed to maximize tracking efficiency through an improved Q-table update scheme. The proposed method departs from the conventional MPPT approach and exploits the inherent strengths of the Q-learning algorithm, namely its fast dynamic response and simple principle. By establishing six kinds of Q-tables based on the RL-MPPT method, the optimal discretized state of the photovoltaic system is identified so as to make full use of the available PV energy and reduce power loss. Finally, under the EN 50530 dynamic test standard, the simulation and experimental results and the tracking efficiencies obtained with each of the six Q-tables are compared.
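
The abstract does not give implementation details, so for orientation the following is a minimal sketch of a generic tabular Q-learning update applied to MPPT step-size selection. The state discretization, action set (duty-cycle step sizes), reward (change in PV output power), and hyperparameters are illustrative assumptions, not the Q-table designs proposed in the paper.

```python
import numpy as np

# Hyperparameters and discretization below are illustrative assumptions,
# not the paper's actual design.
N_STATES = 10                                      # assumed discretized PV operating regions
ACTIONS = np.array([-0.02, -0.005, 0.005, 0.02])   # assumed duty-cycle step sizes
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1              # learning rate, discount factor, exploration rate

Q = np.zeros((N_STATES, len(ACTIONS)))             # the Q-table: one value per (state, step size)
rng = np.random.default_rng()

def choose_action(state):
    """Epsilon-greedy selection of a step size from the Q-table."""
    if rng.random() < EPSILON:
        return int(rng.integers(len(ACTIONS)))
    return int(np.argmax(Q[state]))

def update_q(state, action, reward, next_state):
    """Standard Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

# Typical control loop (assumed): in the current state, pick a step size,
# perturb the converter duty cycle, measure the new PV power and state, and
# use the change in output power as the reward for the chosen step size.
def mppt_iteration(state, duty, measure):
    """measure(duty) is a placeholder returning (power, next_state)."""
    p_before, _ = measure(duty)
    action = choose_action(state)
    new_duty = float(np.clip(duty + ACTIONS[action], 0.0, 1.0))
    p_after, next_state = measure(new_duty)
    update_q(state, action, p_after - p_before, next_state)
    return new_duty, next_state
```

Rewarding the controller with the resulting power change means step sizes that move the operating point toward the maximum power point accumulate higher Q-values, which is the general intuition behind RL-MPPT; the paper's specific state, action, and reward definitions may differ.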
