Abstract

This work presents an improvement of gradient learning algorithms for adjusting neural network weights. The suggested improvement yields an alternative method that converges in fewer iterations and is inherently parallel, making it convenient for implementation on a computer grid. Experimental results show time savings under multi-threaded execution for a wide range of MLP neural network parameters, such as the size of the input/output data matrix and the number of neurons and layers.
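The abstract does not describe the specific parallelization scheme. The following is a minimal sketch, under my own assumptions, of one common way to parallelize a gradient step for an MLP layer: the training batch is split across worker threads, each thread computes a partial gradient, and the partial results are combined before a single weight update. All function names, shapes, and data here are illustrative, not taken from the paper.

```python
# Minimal sketch of data-parallel gradient descent for one linear layer
# (illustrative only; not the authors' algorithm).
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def partial_gradient(W, X_chunk, Y_chunk):
    """Unnormalized MSE gradient of Y ~ X @ W on one chunk of the batch."""
    error = X_chunk @ W - Y_chunk          # (chunk_size, n_out)
    return X_chunk.T @ error               # (n_in, n_out)

def parallel_gradient_step(W, X, Y, lr=0.01, n_workers=4):
    """One gradient-descent step with the batch split across n_workers threads."""
    X_chunks = np.array_split(X, n_workers)
    Y_chunks = np.array_split(Y, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        grads = list(pool.map(partial_gradient, [W] * n_workers, X_chunks, Y_chunks))
    grad = sum(grads) / len(X)             # average over the full batch
    return W - lr * grad

# Usage with randomly generated data: 1000 samples, 8 inputs, 3 outputs.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 8))
W_true = rng.standard_normal((8, 3))
Y = X @ W_true
W = rng.standard_normal((8, 3))
for _ in range(200):
    W = parallel_gradient_step(W, X, Y, lr=0.05)
```

Because each chunk's partial gradient is independent of the others, the same splitting idea extends from threads on one machine to nodes on a grid, which is the setting the abstract refers to.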
