Abstract

This paper proposes a data-driven adaptive optimal control approach for constant-voltage, constant-frequency (CVCF) inverters based on reinforcement learning and adaptive dynamic programming (ADP). Unlike existing work, the load is treated as a dynamic uncertainty, and a robust optimal state-feedback controller is developed. The stability of the inverter-load system is rigorously analyzed. To obtain an accurate differential signal of the output current, a tracking differentiator is designed, and the proposed output-feedback controllers guarantee that the tracking error converges asymptotically to zero. A standard proportional-integral (PI) controller and a linear active disturbance rejection control (LADRC) strategy are also designed for comparison. Simulation results show that the proposed controller is inherently robust and does not require retuning across different applications.
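The tracking differentiator mentioned in the abstract can be illustrated with a minimal sketch. This is a generic linear second-order tracking differentiator, not the paper's exact design; the sampling step `h`, the speed factor `r`, and the 50 Hz current-like test signal are assumptions for illustration only.

```python
import numpy as np

def tracking_differentiator(signal, h, r):
    """Linear second-order tracking differentiator.

    v1 tracks the input signal; v2 tracks its time derivative.
    h is the sampling step; r is a speed factor (larger r gives
    faster tracking but more sensitivity to noise).
    """
    v1, v2 = signal[0], 0.0
    est, dest = [], []
    for v in signal:
        # critically damped second-order dynamics driven by the tracking error
        a = -r * r * (v1 - v) - 2.0 * r * v2
        v1 = v1 + h * v2
        v2 = v2 + h * a
        est.append(v1)
        dest.append(v2)
    return np.array(est), np.array(dest)

# Recover the derivative of a 50 Hz current-like test signal.
h = 1e-4
t = np.arange(0.0, 0.1, h)
x = np.sin(2 * np.pi * 50 * t)
xe, dxe = tracking_differentiator(x, h, r=5000.0)
```

The speed factor trades tracking bandwidth against noise amplification, which is why a tracking differentiator is preferred over direct finite differencing of a noisy measured current.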

Highlights

  • With the reduction of fossil energy reserves and increasing environmental pollution, attention has turned to new energy sources [1]

  • CVCF inverters are widely used in industry, for example in distributed power generation, reactive power compensators, electric aircraft power systems, and uninterruptible power supplies [5], [6]

  • COMPARE CONTROLLER: to verify the superiority of the proposed control algorithm, standard proportional-integral (PI) control and linear active disturbance rejection control (LADRC) are also designed


Summary

INTRODUCTION

With the reduction of fossil energy reserves and increasing environmental pollution, attention has turned to new energy sources [1]. The CVCF inverter should be able to adapt to various loads (e.g., load step-changes, unbalanced loads, and nonlinear loads) [13]. In this case, the controller must be designed carefully, because the LC circuit exhibits resonance peaks [11]. In [38], a voltage-source inverter control method based on the internal model principle is proposed. To solve the problem, we invoke reinforcement learning theory [27] and ADP for non-model-based, data-driven adaptive optimal control design. We are often more interested in obtaining approximately optimal solutions by making full use of limited data. This motivates us to develop an off-policy learning strategy in which we apply the initial control policy to the system over a finite time interval and collect online measurements. For a symmetric matrix P ∈ R^{m×m}, an asymmetric matrix Y ∈ R^{n×m}, and a column vector v ∈ …
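The finite-interval data-collection phase described above can be sketched as follows. The second-order plant matrices `A`, `B`, the initial stabilizing gain `K0`, and the exploration signal are illustrative assumptions standing in for the inverter dynamics, not the paper's model; the point is the structure: run the initial policy plus exploration noise for a limited time and log the state and input trajectories for later learning.

```python
import numpy as np

# Illustrative second-order plant standing in for the inverter dynamics.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
K0 = np.array([[1.0, 1.0]])   # assumed initial stabilizing gain

h, T = 1e-3, 2.0              # sampling step and collection horizon
rng = np.random.default_rng(0)
x = np.array([1.0, 0.0])
X, U = [], []
for k in range(int(T / h)):
    # Behavior policy: initial gain plus exploration noise, so the
    # collected data are rich enough for the least-squares learning step.
    u = -(K0 @ x)[0] + 0.1 * np.sin(100 * k * h) + 0.05 * rng.standard_normal()
    X.append(x.copy())
    U.append(u)
    x = x + h * (A @ x + B[:, 0] * u)  # forward-Euler step of the plant

X, U = np.array(X), np.array(U)
```

Because the learning is off-policy, the exploration noise does not bias the result: the target policy is evaluated from the logged data, not from the behavior policy that generated them.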

MODELING OF SYSTEM
Policy evaluation and improvement: update of the feedback gain matrix
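The policy evaluation and improvement steps can be sketched in their model-based form (Kleinman's iteration); the data-driven ADP version replaces the Lyapunov solve with a least-squares problem built from the collected measurements. The plant `A`, `B`, the weights `Q`, `R`, and the initial gain `K0` below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def lyap_solve(Acl, Q):
    """Solve Acl^T P + P Acl + Q = 0 by vectorization (small systems)."""
    n = Acl.shape[0]
    M = np.kron(np.eye(n), Acl.T) + np.kron(Acl.T, np.eye(n))
    P = np.linalg.solve(M, -Q.flatten()).reshape(n, n)
    return (P + P.T) / 2.0  # symmetrize against round-off

def policy_iteration(A, B, Q, R, K0, iters=20):
    """Kleinman's iteration: evaluate the current gain, then improve it."""
    K = K0
    for _ in range(iters):
        # Policy evaluation: cost matrix P of the current policy u = -K x.
        P = lyap_solve(A - B @ K, Q + K.T @ R @ K)
        # Policy improvement: update the feedback gain matrix from P.
        K = np.linalg.solve(R, B.T @ P)
    return P, K

# Illustrative plant and weights (assumptions, not the paper's model).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
P, K = policy_iteration(A, B, Q, R, K0=np.array([[1.0, 1.0]]))
```

Starting from a stabilizing gain, each improved gain remains stabilizing and the iterates converge to the solution of the algebraic Riccati equation, which is the fixed point the data-driven update also targets.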
INITIAL CONTROL GAIN AND STABILITY ANALYSIS
COMPARE CONTROLLER
SIMULATION RESULTS
CONCLUSIONS
VIII. ACKNOWLEDGMENT

