Abstract

Determining an effective architecture for a multi-layer feedforward backpropagation neural network can be a time-consuming effort. In general it requires human intervention to determine the number of layers, the number of hidden cells, the learning rule and the learning parameters. Over the past few years several approaches to dynamically configuring neural networks have been proposed, which remove most of the responsibility for choosing a suitable network configuration from the user. Just as important as finding a viable network architecture for a given learning problem is the need to obtain a minimal configuration. The total time required to emulate or simulate a neural network depends largely on the number of connections in the network, so it is essential to provide pruning methods that reduce network complexity. In this paper two approaches to network pruning are investigated: single-pass and multi-pass pruning. Their effectiveness is demonstrated by applying them to several real-world problem domains and comparing them with a modification of Optimal Brain Damage. It is shown that multi-pass pruning is not only fast but also effective in reducing network size when used in conjunction with Divide & Conquer Learning. Finally, it is general enough to be applied to other learning approaches that configure networks dynamically.
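Although the abstract only names the pruning approaches, a minimal sketch of saliency-based connection pruning in the spirit of Optimal Brain Damage may help fix ideas. The function names, the diagonal-Hessian stand-in and the pruning fraction below are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch only: saliency-based pruning in the spirit of
# Optimal Brain Damage. The diagonal-Hessian values and the pruning
# fraction are assumptions for demonstration, not the paper's method.
import numpy as np

def prune_by_saliency(weights, hessian_diag, prune_fraction=0.2):
    """Zero out the connections with the smallest OBD-style saliency.

    saliency_i = 0.5 * h_ii * w_i^2  (diagonal-Hessian approximation)
    """
    saliency = 0.5 * hessian_diag * weights ** 2
    n_prune = int(prune_fraction * weights.size)
    if n_prune == 0:
        return weights, np.ones_like(weights, dtype=bool)
    # Flat indices of the least salient connections.
    cut = np.argsort(saliency, axis=None)[:n_prune]
    mask = np.ones(weights.size, dtype=bool)
    mask[cut] = False
    pruned = weights.flatten()          # flatten() returns a copy
    pruned[~mask] = 0.0
    return pruned.reshape(weights.shape), mask.reshape(weights.shape)

# Example: prune 20% of a small random weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 5))
h = np.abs(rng.normal(size=(4, 5)))     # stand-in for diagonal Hessian terms
w_pruned, keep_mask = prune_by_saliency(w, h, prune_fraction=0.2)
print("connections removed:", (~keep_mask).sum())
```

In the single-/multi-pass terminology of the abstract, such a step could be applied once after training or repeatedly with retraining in between; the sketch above is agnostic to that choice.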
