Abstract

Strategically injected noise can speed up convergence when training neural networks with the backpropagation algorithm. Noise injection during neural network training has been shown empirically to improve both convergence and generalization. In this work, we present a methodology for accelerating learning convergence using weight noise in a Single Layer Feed-forward Network (SLFN) architecture, together with efficient and effective techniques for avoiding entrapment in local minima. The proposed controlled introduction of noise is based on four proven analytical and experimental methods. We show that criteria-based mini-batch noise injection into the weights during training often outperforms both noiseless training and the fixed noise schedules reported in the literature, in terms of network generalization as well as convergence speed. The effectiveness of the methodology is demonstrated empirically, achieving on average a 15%–25% improvement in convergence speed compared with fixed-noise and noiseless networks. The proposed method is evaluated on the MNIST dataset and on additional datasets from the UCI repository, and the comparative analysis confirms its superior performance in convergence speed.
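The sketch below illustrates the general idea of criteria-based mini-batch weight-noise injection during backpropagation training of an SLFN. It is not the authors' exact procedure: the specific criterion used here (inject noise only when the mini-batch loss stops improving), the Gaussian noise scale `sigma`, and the toy dataset are all assumptions for illustration; the paper's four injection criteria are not specified in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary-classification data (stand-in for MNIST / UCI datasets)
X = rng.normal(size=(512, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# Single Layer Feed-forward Network (one hidden layer)
n_in, n_hid, n_out = 20, 32, 1
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr, batch, sigma = 0.1, 64, 0.01   # sigma: assumed weight-noise scale
prev_loss = np.inf

for epoch in range(50):
    perm = rng.permutation(len(X))
    for start in range(0, len(X), batch):
        idx = perm[start:start + batch]
        xb, yb = X[idx], y[idx]

        # Forward pass
        h = sigmoid(xb @ W1)
        out = sigmoid(h @ W2)
        loss = np.mean((out - yb) ** 2)

        # Assumed injection criterion: add weight noise only when the
        # mini-batch loss has stopped improving, to escape flat regions
        # or shallow local minima (one plausible "criteria-based" rule).
        if loss >= prev_loss:
            W1 += rng.normal(scale=sigma, size=W1.shape)
            W2 += rng.normal(scale=sigma, size=W2.shape)
        prev_loss = loss

        # Backpropagation updates (mean squared error, sigmoid units)
        d_out = (out - yb) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out / len(idx)
        W1 -= lr * xb.T @ d_h / len(idx)
```

A fixed-noise baseline, by contrast, would add noise on every mini-batch regardless of the criterion; the abstract's claim is that gating the injection on such criteria improves both convergence speed and generalization.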
