Abstract

This paper introduces a learning algorithm for multi-layered feedforward networks: a weight evolution algorithm with deterministic perturbation. During the learning phase of a gradient algorithm (such as backpropagation), the network weights are deliberately adjusted to improve system performance, with the aim of reducing the overall system error after every weight update. By examining the error components, some of the network weights can be adjusted substantially so as to produce an overall reduction in system error. Using deterministic perturbation, it is found that weight evolution between the hidden and output layers accelerates convergence, whereas weight evolution between the input and hidden layers helps overcome the local minima problem.
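The abstract does not give the exact evolution rule, so the sketch below is one plausible reading of it, not the paper's method: after ordinary backpropagation updates, each weight in a chosen layer is nudged deterministically by a fixed ±delta, and a nudge is kept only when it lowers the overall sum-of-squares error. The network size, the perturbation step `delta`, and the schedule for invoking the evolution step are all illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def total_error(W1, W2, X, T):
    # Sum-of-squares error of a one-hidden-layer sigmoid network.
    y = sigmoid(sigmoid(X @ W1) @ W2)
    return 0.5 * np.sum((y - T) ** 2)

def backprop_step(W1, W2, X, T, lr=0.5):
    # One standard gradient-descent (backpropagation) update.
    h = sigmoid(X @ W1)
    y = sigmoid(h @ W2)
    d2 = (y - T) * y * (1.0 - y)
    d1 = (d2 @ W2.T) * h * (1.0 - h)
    W2 -= lr * (h.T @ d2)
    W1 -= lr * (X.T @ d1)

def evolve(W1, W2, X, T, layer, delta=0.05):
    # Assumed deterministic perturbation: try +/-delta on each weight of
    # the chosen layer and keep the change only if overall error drops.
    W = W1 if layer == 1 else W2
    best = total_error(W1, W2, X, T)
    for idx in np.ndindex(W.shape):
        for step in (delta, -delta):
            W[idx] += step
            e = total_error(W1, W2, X, T)
            if e < best:
                best = e          # improvement: keep the perturbation
                break
            W[idx] -= step        # no improvement: revert

# Usage: XOR, a classic case where backprop can stall in local minima.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1 = rng.normal(scale=0.5, size=(2, 3))   # input -> hidden
W2 = rng.normal(scale=0.5, size=(3, 1))   # hidden -> output

for epoch in range(3000):
    backprop_step(W1, W2, X, T)
    if epoch % 50 == 0:                   # evolution schedule is assumed
        evolve(W1, W2, X, T, layer=2)     # hidden->output: speeds convergence
        evolve(W1, W2, X, T, layer=1)     # input->hidden: escapes local minima

print("final error:", total_error(W1, W2, X, T))
```

Under this reading, the evolution step can only decrease the error, which matches the abstract's claim that each adjustment yields an overall reduction in system error; the split between the two layers mirrors the reported roles of faster convergence and local-minima escape.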
