Abstract

This paper presents an approach for fast learning on big data. The proposed approach combines a momentum factor and a training rate, where the momentum is a dynamic function of the training rate, in order to avoid weight overshoot and speed up the training of the back-propagation neural network engine. The two factors are adjusted dynamically to ensure fast convergence of the training process. Experiments on the 2-bit XOR parity problem were conducted using Matlab and a sigmoid activation function. The experimental results show that the proposed approach performs significantly better than the standard back-propagation neural network in terms of training time: both the maximum and the minimum training times are significantly shorter than those of the standard algorithm at an error threshold of 10⁻⁵.
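
The abstract does not specify the exact functional form linking the momentum to the training rate, so the following is only a minimal sketch of back-propagation with a momentum term on the 2-bit XOR task. The network size, hyperparameters, and the placeholder rule `momentum = 1.0 - learning_rate` are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# 2-bit XOR data set
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-2-1 network; each weight matrix carries an extra bias row
W1 = rng.uniform(-1, 1, size=(3, 2))   # input (+bias) -> hidden
W2 = rng.uniform(-1, 1, size=(3, 1))   # hidden (+bias) -> output
dW1_prev = np.zeros_like(W1)           # previous updates for the momentum term
dW2_prev = np.zeros_like(W2)

learning_rate = 0.5
threshold = 1e-5                        # error threshold used in the paper

for epoch in range(100_000):
    # forward pass
    X_b = np.hstack([X, np.ones((4, 1))])            # append bias input
    hidden = sigmoid(X_b @ W1)
    hidden_b = np.hstack([hidden, np.ones((4, 1))])
    output = sigmoid(hidden_b @ W2)

    error = y - output
    mse = np.mean(error ** 2)
    if mse < threshold:
        print(f"converged after {epoch} epochs, MSE={mse:.2e}")
        break

    # backward pass (sigmoid derivative = s * (1 - s))
    delta_out = error * output * (1.0 - output)
    delta_hid = (delta_out @ W2[:2].T) * hidden * (1.0 - hidden)

    # Placeholder dynamic momentum: larger training rates get less momentum.
    # This is an assumed relation for illustration, not the paper's formula.
    momentum = 1.0 - learning_rate

    dW2 = learning_rate * hidden_b.T @ delta_out + momentum * dW2_prev
    dW1 = learning_rate * X_b.T @ delta_hid + momentum * dW1_prev
    W2 += dW2
    W1 += dW1
    dW1_prev, dW2_prev = dW1, dW2
```

Under this sketch, the momentum term reuses the previous weight update, which is the standard way a momentum factor damps oscillation and reduces weight overshoot; the paper's contribution lies in how the two factors are adapted jointly during training.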
