Abstract

Multilayer feed-forward neural networks are widely trained by minimizing an error function. Back-propagation is the most common training method for such networks, but it often suffers from the local-minimum problem. To avoid this problem, we propose a new back-propagation training method based on chaos. We investigate whether the randomness and ergodicity of chaos can enable the learning algorithm to escape from local minima. The validity of the proposed method is examined through simulations on three real classification tasks: the Ionosphere, the Wisconsin Breast Cancer (WBC), and the credit-screening datasets. The algorithm is shown to outperform the original back-propagation and to be comparable with the Levenberg-Marquardt algorithm, while being simpler and easier to implement.
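The abstract does not give the exact update rule, so the following is only a minimal sketch of the general idea: ordinary back-propagation on a one-hidden-layer network, with a zero-centred perturbation drawn from a chaotic logistic map added to each weight update and annealed over training. The logistic-map parameter r = 4, the perturbation scale `eta_c`, its decay schedule, and the toy XOR task are all illustrative assumptions, not the authors' settings.

```python
# Sketch only: chaos-perturbed back-propagation (assumed form, not the paper's exact algorithm).
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: XOR, a small problem where plain gradient descent can stall.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Network: 2 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1)); b2 = np.zeros(1)

params = [W1, b1, W2, b2]
# One logistic-map state per weight; any start value in (0, 1) away from fixed points.
chaos = [rng.uniform(0.1, 0.9, size=p.shape) for p in params]

lr = 0.5      # gradient-descent learning rate (assumed)
eta_c = 0.1   # initial scale of the chaotic perturbation (assumed)

for epoch in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Back-propagate the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    grads = [X.T @ d_h, d_h.sum(0), h.T @ d_out, d_out.sum(0)]

    for p, g, c in zip(params, grads, chaos):
        c[:] = 4.0 * c * (1.0 - c)        # advance each chaotic orbit (r = 4: chaotic, ergodic in (0, 1))
        p -= lr * g + eta_c * (c - 0.5)   # gradient step plus a zero-centred chaotic kick

    eta_c *= 0.999  # anneal the chaotic term so training can settle into a minimum

print("final outputs:", out.ravel())
```

The chaotic term plays a role loosely analogous to the noise schedule in simulated annealing: early in training it can push the weights out of shallow local minima, and as it decays the update reduces to plain gradient descent.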
