Abstract

Heuristic and deterministic optimization methods are both widely used for training artificial neural networks, and each has its own advantages and disadvantages. Heuristic stochastic methods such as the genetic algorithm perform a global search but converge slowly near the global optimum. Deterministic methods such as gradient descent, by contrast, converge quickly in the neighborhood of an optimum but may become trapped in a local optimum. Motivated by these complementary weaknesses, this paper proposes a hybrid learning algorithm combining the genetic algorithm (GA) with gradient descent (GD), called HGAGD. The new algorithm couples the global exploration ability of GA with the accurate local exploitation ability of GD to achieve faster convergence and a more accurate final solution. HGAGD is then employed as a new training method to optimize the parameters of a quantum-inspired neural network (QINN) for two different applications. First, two benchmark functions are chosen to demonstrate the potential of the proposed QINN with the HGAGD algorithm on function approximation problems. Next, the performance of the proposed method in forecasting the Mackey–Glass time series and the Lorenz attractor is studied. The results of these studies show the superiority of the introduced approach over other published approaches.
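The hybrid strategy the abstract describes — a GA for global exploration followed by gradient descent for local refinement — can be illustrated with a minimal sketch. This is not the paper's HGAGD algorithm or its QINN training setup: the objective (a sphere benchmark), the GA operators (truncation selection, arithmetic crossover, Gaussian mutation), the numerical gradient, and all parameter values are illustrative assumptions.

```python
import random

def sphere(x):
    # Illustrative benchmark objective: f(x) = sum(x_i^2), global minimum 0 at the origin.
    return sum(v * v for v in x)

def num_grad(f, x, h=1e-6):
    # Central-difference gradient estimate; the paper's GD phase would use
    # analytic gradients of the network error, this is only a stand-in.
    g = []
    for i in range(len(x)):
        xp, xm = x[:], x[:]
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

def hybrid_ga_gd(f, dim=2, pop_size=20, generations=30, gd_steps=200, lr=0.1, seed=0):
    rng = random.Random(seed)
    # --- Phase 1: GA global exploration over a broad search box ---
    pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f)
        elite = pop[: pop_size // 2]                        # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # arithmetic crossover
            child = [c + rng.gauss(0.0, 0.1) for c in child]  # Gaussian mutation
            children.append(child)
        pop = elite + children
    best = min(pop, key=f)
    # --- Phase 2: GD local exploitation, started from the GA's best point ---
    x = best[:]
    for _ in range(gd_steps):
        g = num_grad(f, x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x, f(x)

x, fx = hybrid_ga_gd(sphere)
```

The GA phase supplies a starting point near the global basin, so the subsequent GD phase converges rapidly instead of stalling in a distant local optimum — the complementarity the abstract argues for.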
