Abstract
Discusses the weight update rule in the cascade correlation neural network learning algorithm. The weight update rule implements gradient-based optimization of the correlation between a new hidden unit's output and the previous network's error. The author presents a derivation of the gradient of the correlation function and shows that the resulting weight update rule yields slightly faster training. The author also shows that the new rule is mathematically equivalent to the one presented in the original cascade correlation paper and discusses the numerical issues underlying the difference in performance. Since no derivation of the cascade correlation weight update rule has been published, this paper should be useful to those who wish to understand the rule.
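To make the quantity being optimized concrete, the following is a minimal sketch of the correlation measure from the original cascade correlation algorithm and its gradient with respect to a candidate unit's incoming weights. All names (`candidate_weight_gradient`, the use of `tanh` as the candidate activation, the array shapes) are illustrative assumptions, not taken from the paper abstracted above.

```python
import numpy as np

def candidate_weight_gradient(inputs, errors, weights):
    """Gradient of the cascade-correlation candidate score
        S = sum_o | sum_p (V_p - V_bar)(E_{p,o} - E_bar_o) |
    with respect to the candidate's incoming weights (sketch; a tanh
    candidate activation is assumed here).

    inputs : (P, I) candidate inputs, one row per training pattern
    errors : (P, O) residual network errors, one column per output
    weights: (I,)   candidate unit's incoming weights
    """
    net = inputs @ weights                    # (P,) net input per pattern
    v = np.tanh(net)                          # candidate output V_p
    v_centered = v - v.mean()                 # V_p - V_bar
    e_centered = errors - errors.mean(axis=0) # E_{p,o} - E_bar_o

    corr = v_centered @ e_centered            # (O,) per-output covariance
    sigma = np.sign(corr)                     # sign of each |.| term

    fprime = 1.0 - v**2                       # tanh'(net_p)
    # dS/dw_i = sum_{p,o} sigma_o (E_{p,o} - E_bar_o) f'(net_p) x_{p,i}
    delta = (e_centered * sigma[None, :]).sum(axis=1) * fprime  # (P,)
    return inputs.T @ delta                   # (I,)
```

Taking a small step along this gradient (gradient ascent) should increase the candidate's correlation score S, which is the training phase the abstract's weight update rule concerns.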