Abstract

Inspired by Hebb's cell assembly theory of how the brain works, we have developed the function localization neural network (FLNN). The main part of an FLNN is structurally identical to an ordinary feedforward neural network, but it is regarded as consisting of several overlapping modules that are switched according to the input pattern. An FLNN constructed in this way has been shown to have better representation ability than an ordinary neural network. However, the backpropagation (BP) training algorithm for such an FLNN easily gets stuck at a local minimum. In this paper, we mainly discuss methods for improving BP training of the FLNN by exploiting the structural properties of the network. Two methods are proposed, and numerical simulations demonstrate the effectiveness of the improved BP training methods.
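
To make the architecture concrete, below is a minimal sketch (not the authors' code) of a one-hidden-layer FLNN in which the hidden units form two overlapping modules, and a binary mask, chosen per input pattern, gates which units participate in the forward and BP passes. The module layout, the `select_module` switching rule, and all hyperparameters are illustrative assumptions; the abstract does not specify them.

```python
# Minimal FLNN sketch: an ordinary MLP whose hidden units are grouped into
# overlapping modules, with a per-pattern binary mask gating which units fire.
# Module layout and switching rule below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 4, 8, 2
W1 = rng.normal(scale=0.5, size=(n_hidden, n_in)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_out, n_hidden)); b2 = np.zeros(n_out)

# Two overlapping modules over the 8 hidden units (units 2-5 are shared).
module_masks = np.array([
    [1, 1, 1, 1, 1, 1, 0, 0],   # module 0: units 0-5
    [0, 0, 1, 1, 1, 1, 1, 1],   # module 1: units 2-7
], dtype=float)

def select_module(x):
    """Hypothetical switching rule: choose a module from the input pattern."""
    return 0 if x[0] >= 0.5 else 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    mask = module_masks[select_module(x)]
    h = sigmoid(W1 @ x + b1) * mask      # only the active module's units fire
    y = sigmoid(W2 @ h + b2)
    return y, h, mask

def bp_step(x, t, lr=0.5):
    """One BP update; gradients flow only through the masked (active) units."""
    global W1, b1, W2, b2
    y, h, mask = forward(x)
    delta2 = (y - t) * y * (1 - y)             # output-layer error signal
    delta1 = (W2.T @ delta2) * h * (1 - h)     # masked units have h = 0 here
    W2 -= lr * np.outer(delta2, h); b2 -= lr * delta2
    W1 -= lr * np.outer(delta1, x); b1 -= lr * delta1

x, t = rng.random(n_in), np.array([1.0, 0.0])
for _ in range(100):
    bp_step(x, t)
print(forward(x)[0])
```

Because masked-out units receive zero gradient, each input pattern trains only its own module plus the shared overlap, which is the kind of structural property the improved BP methods described in the paper aim to exploit.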
