Abstract

This paper analyzes the fundamental principle and computer implementation of the LMBP (Levenberg-Marquardt Back-Propagation) algorithm and identifies the main factors that limit its training speed. A method for accelerating training is proposed and applied to the basic LMBP algorithm: when computing the increments of the weights and biases, the calculation is roughly three times as fast as in the basic LMBP algorithm. Finally, the improved LMBP algorithm is applied to a fault-diagnosis training simulation based on a device's gearbox. The results indicate that the overall training speed of a single-hidden-layer BP neural network using the improved LMBP algorithm is approximately three times that of the basic LMBP algorithm.
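The weight/bias increment at the heart of LMBP is the standard Levenberg-Marquardt update, Δw = -(JᵀJ + μI)⁻¹Jᵀe, where J is the Jacobian of the residuals and μ is the damping factor. The following is a minimal NumPy sketch of that standard update on a toy one-parameter fit; it illustrates the increment computation only, not the paper's specific acceleration, and all names (`lm_increment`, the toy data) are illustrative assumptions.

```python
import numpy as np

def lm_increment(J, e, mu):
    # Standard Levenberg-Marquardt increment:
    #   dw = -(J^T J + mu*I)^{-1} J^T e
    n = J.shape[1]
    A = J.T @ J + mu * np.eye(n)
    return -np.linalg.solve(A, J.T @ e)

# Toy example (assumed for illustration): fit y = w*x to data generated
# with w = 2, starting from w = 0.5.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x
w = 0.5
for _ in range(10):
    e = w * x - y          # residual vector
    J = x.reshape(-1, 1)   # Jacobian of residuals w.r.t. w
    w = w + lm_increment(J, e, mu=0.01)[0]
print(round(w, 4))  # converges to 2.0
```

In a full LMBP trainer, μ is adapted each epoch (decreased after a successful step, increased otherwise); that adaptation loop is omitted here for brevity.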
