Abstract

Heuristic stochastic optimization techniques such as genetic algorithms perform a global search, but they suffer from slow convergence near the global optimum. On the other hand, deterministic techniques such as gradient descent converge quickly near the global optimum but may get stuck in a local optimum. Motivated by these problems, a hybrid learning algorithm combining the genetic algorithm (GA) with gradient descent (GD), called HGAGD, is proposed in this paper. The new algorithm combines the global exploration ability of GA with the accurate local exploitation ability of GD to achieve faster convergence and better accuracy in the final solution. The HGAGD is then used to train a qubit neural network (QNN), a good candidate for enhancing the computational efficiency of conventional neural networks, for two different applications. First, a benchmark function is chosen to illustrate the potential of the proposed approach for the function approximation problem. Subsequently, the feasibility of the proposed method in designing an indirect adaptive controller for damping low-frequency oscillations in power systems is studied. The results of these studies show that the proposed controller trained by the HGAGD achieves satisfactory control performance.
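To make the hybrid idea concrete, the sketch below shows one common way to combine the two phases: a GA explores the search space globally, and gradient descent then refines the best individual found. This is a minimal illustration, not the paper's HGAGD; the selection, crossover, and mutation operators, the switching schedule, the step size, and the sphere test function are all assumptions made for the example.

```python
import numpy as np

def hybrid_ga_gd(f, grad_f, dim, bounds=(-5.0, 5.0), pop_size=30,
                 generations=50, gd_steps=20, lr=0.05, seed=0):
    """Illustrative hybrid optimizer: GA for global exploration,
    then plain gradient descent for local refinement.

    NOTE: a sketch under assumed operators/hyperparameters,
    not the HGAGD algorithm from the paper.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))

    for _ in range(generations):
        fitness = np.apply_along_axis(f, 1, pop)
        order = np.argsort(fitness)            # minimization
        parents = pop[order[:pop_size // 2]]   # truncation selection

        # Arithmetic crossover between randomly paired parents
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        alpha = rng.uniform(size=(pop_size, 1))
        pop = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]

        # Gaussian mutation keeps the population exploring
        pop += rng.normal(0.0, 0.1, size=pop.shape)
        np.clip(pop, lo, hi, out=pop)

    # Hand the best GA individual to gradient descent for fine-tuning
    x = pop[np.argmin(np.apply_along_axis(f, 1, pop))]
    for _ in range(gd_steps):
        x = x - lr * grad_f(x)
    return x, f(x)

if __name__ == "__main__":
    # Example: minimize the sphere function f(x) = sum(x_i^2)
    sphere = lambda x: float(np.sum(x ** 2))
    sphere_grad = lambda x: 2.0 * x
    x_best, f_best = hybrid_ga_gd(sphere, sphere_grad, dim=4)
    print(x_best, f_best)
```

The design choice the abstract motivates is visible here: the GA phase is insensitive to initialization and can escape local minima, while the GD phase supplies the fast final convergence that the GA alone lacks. In the paper this refinement step trains a QNN's parameters rather than a benchmark function.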
