Abstract

In recent years, artificial neural networks trained with back-propagation have been widely used in chemical and petroleum engineering. In this article, particle swarm optimization is combined with a back-propagation algorithm to form a new learning algorithm for training artificial neural networks. Particle swarm optimization converges rapidly during the initial stages of a global search, but the search slows considerably near the global optimum. In contrast, the gradient-descent method converges faster in the neighbourhood of the global optimum and achieves higher accuracy there. The proposed algorithm therefore combines the local searching ability of gradient-based back-propagation with the global searching ability of particle swarm optimization: particle swarm optimization is used to determine the initial weights for the gradient-descent stage. This strategy is applied to model a highly nonlinear system, a yeast fermentation bioreactor. A comparison of the results shows that the particle swarm optimization-back propagation model is superior to the back-propagation artificial neural network model for identification of nonlinear systems.
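
The following is a minimal sketch of the two-stage idea described above, not the authors' implementation: a particle swarm searches the weight space of a small feed-forward network, and the best particle then seeds plain gradient-descent back-propagation. The network size, PSO constants, learning rate, and toy dataset are illustrative assumptions rather than the paper's bioreactor setup.

```python
# Sketch of PSO-initialized back-propagation (assumed setup, not the paper's).
import numpy as np

rng = np.random.default_rng(0)

# Toy nonlinear regression data standing in for the identification task.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

N_IN, N_HID, N_OUT = 2, 6, 1
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT   # total weight count

def unpack(w):
    """Split a flat weight vector into layer matrices and biases."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID];                              i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:i + N_OUT]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1 + b1)          # hidden layer
    return h @ W2 + b2, h             # linear output layer

def mse(w):
    out, _ = forward(w, X)
    return np.mean((out.ravel() - y) ** 2)

# Stage 1: PSO global search over the flattened weight vector.
n_particles, n_iter = 30, 100
inertia, c1, c2 = 0.7, 1.5, 1.5       # assumed PSO constants
pos = rng.uniform(-1, 1, size=(n_particles, N_W))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, N_W))
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    f = np.array([mse(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

# Stage 2: gradient-descent back-propagation started from the PSO solution.
w, lr = gbest.copy(), 0.05
for _ in range(2000):
    out, h = forward(w, X)
    err = (out.ravel() - y).reshape(-1, 1)      # dLoss/d(output), up to a factor of 2
    W1, b1, W2, b2 = unpack(w)
    gW2 = h.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)            # back-propagate through tanh
    gW1 = X.T @ dh / len(X)
    gb1 = dh.mean(axis=0)
    grad = np.concatenate([gW1.ravel(), gb1, gW2.ravel(), gb2])
    w -= lr * grad

print(f"MSE after PSO: {mse(gbest):.4f}, after PSO-BP refinement: {mse(w):.4f}")
```

In this sketch PSO plays the role of a coarse global search over initial weights, while the gradient stage supplies the fast local convergence near the optimum that the abstract attributes to gradient descent.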
