Abstract

In order to improve the operating stability of brushless direct current motors (BLDCM), this paper proposes a diagonal recursive neural network (DRNN) control strategy based on the Q-learning algorithm, referred to as Q-DRNN. In Q-DRNN, the DRNN iterates over the output variables through a unique recursive loop in the hidden layer, and its key weights are optimized to speed up the iteration. Moreover, an improved Q-learning algorithm is introduced to adjust the weight momentum factor of the DRNN, giving it the ability to learn and correct itself online so that the BLDCM achieves better control performance. In the MATLAB/Simulink environment, Q-DRNN is tested and compared with other popular control methods in terms of speed and torque response under different operating conditions. The results show that Q-DRNN has better adaptability, stronger disturbance rejection, and greater robustness.
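The abstract does not spell out the update rules, so the following is a minimal, self-contained Python sketch of the mechanism it describes: a hidden layer with diagonal (self-recurrent) connections trained with a momentum term whose factor is chosen online by tabular Q-learning. The class names, layer sizes, error discretization, reward shaping, and the first-order plant standing in for the BLDCM speed loop are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (assumptions, not the paper's code): a diagonal recurrent
# neural network (DRNN) whose weight-update momentum factor is picked online
# by tabular Q-learning, driving a toy first-order plant in place of the BLDCM.
import numpy as np

class DRNN:
    """One hidden layer; each hidden unit feeds back only to itself (diagonal recurrence)."""
    def __init__(self, n_in, n_hidden, lr=0.05, seed=0):
        rng = np.random.default_rng(seed)
        self.Wi = rng.normal(scale=0.3, size=(n_hidden, n_in))  # input weights
        self.Wd = rng.normal(scale=0.3, size=n_hidden)          # diagonal recurrent weights
        self.Wo = rng.normal(scale=0.3, size=n_hidden)          # output weights
        self.h_prev = np.zeros(n_hidden)
        self.lr = lr
        self.dWi, self.dWd, self.dWo = 0.0, 0.0, 0.0            # previous increments (momentum)

    def forward(self, x):
        self.x = np.asarray(x, dtype=float)
        self.h = np.tanh(self.Wi @ self.x + self.Wd * self.h_prev)  # recursive loop in hidden layer
        return float(self.Wo @ self.h)

    def update(self, error, momentum):
        """Gradient step on 0.5*error^2 with a momentum term scaled by `momentum`."""
        dy = -error                                   # tracking error as output-error surrogate
        dh = dy * self.Wo * (1.0 - self.h ** 2)
        self.dWo = -self.lr * dy * self.h + momentum * self.dWo
        self.dWi = -self.lr * np.outer(dh, self.x) + momentum * self.dWi
        self.dWd = -self.lr * dh * self.h_prev + momentum * self.dWd
        self.Wo += self.dWo; self.Wi += self.dWi; self.Wd += self.dWd
        self.h_prev = self.h                          # carry hidden state to the next step

class MomentumQLearner:
    """Tabular Q-learning over a handful of candidate momentum factors."""
    def __init__(self, actions=(0.0, 0.3, 0.6, 0.9), n_states=5,
                 alpha=0.1, gamma=0.9, epsilon=0.1, seed=1):
        self.actions, self.n_states = actions, n_states
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.Q = np.zeros((n_states, len(actions)))
        self.rng = np.random.default_rng(seed)

    def state(self, error):                           # coarse bins over |error| (assumed scaling)
        return min(int(abs(error) * self.n_states), self.n_states - 1)

    def choose(self, s):                              # epsilon-greedy action selection
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(len(self.actions)))
        return int(np.argmax(self.Q[s]))

    def learn(self, s, a, reward, s_next):            # standard one-step Q update
        self.Q[s, a] += self.alpha * (reward + self.gamma * self.Q[s_next].max() - self.Q[s, a])

# Toy closed loop: reward the learner for shrinking the speed-tracking error.
net, ql = DRNN(n_in=3, n_hidden=6), MomentumQLearner()
speed, target, prev_err = 0.0, 1.0, 1.0
for k in range(200):
    err = target - speed
    s = ql.state(err)
    a = ql.choose(s)
    u = net.forward([err, err - prev_err, speed])     # controller output
    speed += 0.1 * (u - speed)                        # assumed first-order plant, not a BLDCM model
    new_err = target - speed
    net.update(new_err, momentum=ql.actions[a])       # Q-selected momentum factor
    ql.learn(s, a, reward=abs(err) - abs(new_err), s_next=ql.state(new_err))
    prev_err = err
```

Treating the momentum factor as the Q-learning action keeps the action space tiny, which is what makes an online tabular scheme practical inside a control loop; the paper's actual state, action, and reward definitions may differ.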

Highlights

  • Due to its simple structure, high efficiency, long service life, and low noise, the brushless direct current motor (BLDCM) has been widely used in national defense, aerospace, robotics, and other fields.[1,2,3,4,5] The BLDCM plays an important role in modern motor control systems

  • Combining the strong search ability of Q-learning with the advantages of the diagonal recursive neural network (DRNN), such as its recursive loop structure, dynamic mapping ability, and adaptability to time-varying systems, this paper presents a control strategy, Q-DRNN, to improve the performance of the BLDCM

  • To verify the effectiveness of Q-DRNN, its performance is tested under different operating conditions and compared with a neural network PID (NNPID) control method,[23] an online fuzzy supervisory learning method based on an RBFNN (OnlineRBFNN),[24] an antlion-algorithm-optimized fuzzy PID supervised online recurrent fuzzy neural network (ALORFNN) control method,[25] and a Q-learning-optimized regression neural network (QLRNN) control method[31]



Introduction

Due to its simple structure, high efficiency, long service life, and low noise, the BLDCM has been widely used in national defense, aerospace, robotics, and other fields.[1,2,3,4,5] The BLDCM plays an important role in modern motor control systems. In Premkumar et al.,[25] an antlion-algorithm-optimized fuzzy PID supervised online recurrent fuzzy neural network controller is proposed for the speed control problem of the BLDCM, and the learning parameters of the supervised online recurrent fuzzy neural network controller are optimized with the antlion algorithm. In related work, a GA-PSO algorithm is used to optimize the learning rate, forgetting factor, and maximum decreasing momentum constant of an online ANFIS controller under different torque conditions of the BLDCM, and the effectiveness of that method has been verified by simulation experiments.

To verify the effectiveness of Q-DRNN, its performance is tested under different operating conditions and compared with a neural network PID (NNPID) control method,[23] an online fuzzy supervisory learning method based on an RBFNN (OnlineRBFNN),[24] an antlion-algorithm-optimized fuzzy PID supervised online recurrent fuzzy neural network (ALORFNN) control method,[25] and a Q-learning-optimized regression neural network (QLRNN) control method.[31] The differential equation model of the two-stage three-phase BLDCM is established as follows.

Voltage equation
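The equations themselves are not included in this preview; a standard form of the three-phase BLDCM voltage equation, given here as an assumed form with per-phase resistance $R$, self-inductance $L$, and mutual inductance $M$, is

$$
\begin{bmatrix} u_a \\ u_b \\ u_c \end{bmatrix} =
R \begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} +
(L-M)\,\frac{d}{dt} \begin{bmatrix} i_a \\ i_b \\ i_c \end{bmatrix} +
\begin{bmatrix} e_a \\ e_b \\ e_c \end{bmatrix}
$$

where $u_{a,b,c}$, $i_{a,b,c}$, and $e_{a,b,c}$ are the phase voltages, phase currents, and trapezoidal back-EMFs, respectively.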
Equation of motion
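A commonly used mechanical equation for a BLDCM, again an assumed form rather than the paper's exact expression, is

$$
T_e - T_L = J\,\frac{d\omega}{dt} + B_v\,\omega, \qquad
T_e = \frac{e_a i_a + e_b i_b + e_c i_c}{\omega}
$$

where $T_e$ is the electromagnetic torque, $T_L$ the load torque, $J$ the moment of inertia, $B_v$ the viscous friction coefficient, and $\omega$ the mechanical angular speed.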
Equation of state
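The paper's specific state-space formulation is not shown in this preview; as a sketch, with the phase currents and speed taken as state variables, the voltage and motion equations above can be arranged into a state equation of the general form

$$
\dot{x} = A x + B u, \qquad y = C x, \qquad
x = \begin{bmatrix} i_a & i_b & i_c & \omega \end{bmatrix}^{T}
$$

where the entries of $A$ and $B$ follow from $R$, $L-M$, $J$, and $B_v$, and the back-EMF terms enter through the rotor position.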
Conclusion