Abstract

This paper considers neural networks in which the initial values of the weights and biases are given by random numbers, as is usual. The results of backpropagation (BP) learning are compared for networks composed of unipolar units, whose activity ranges from 0 to 1, and networks of bipolar units, whose activity ranges from −0.5 to 0.5. When the input space is large, the separating hyperplane at the outset of learning passes near the center of the input space in the bipolar case, whereas in the unipolar case it passes near a vertex. Because of this property, the number of separating hyperplanes that effectively partition the input spaces of the layers during the updating or realization of a solution is larger in the bipolar case than in the unipolar case, and the difference becomes more pronounced as the network size increases. Simulations verify that, when the network is large, learning in the bipolar network converges for a wider range of initial values than learning in the unipolar network. It is also shown that the kinds of solution obtained by the unipolar network tend to be biased.
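As a rough numerical sketch of the hyperplane-position property (not taken from the paper), the following Python snippet draws weights and a bias uniformly from [−0.5, 0.5] (an assumed initialization scheme) and measures the mean distance from the random hyperplane w·x + b = 0 to the center of each input hypercube. Function names and the sampling range are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def mean_distance(point, n_inputs, n_trials=10_000, scale=0.5):
    """Mean distance from `point` to a random hyperplane w.x + b = 0,
    with each w_i and b drawn uniformly from [-scale, scale] (assumed)."""
    total = 0.0
    for _ in range(n_trials):
        w = rng.uniform(-scale, scale, size=n_inputs)
        b = rng.uniform(-scale, scale)
        # Point-to-hyperplane distance: |w . point + b| / ||w||
        total += abs(w @ point + b) / np.linalg.norm(w)
    return total / n_trials

for n in (2, 10, 100):
    # Unipolar inputs lie in [0, 1]^n: center (0.5, ..., 0.5), vertex at the origin.
    # Bipolar inputs lie in [-0.5, 0.5]^n: center at the origin.
    uni_center = mean_distance(np.full(n, 0.5), n)
    origin_dist = mean_distance(np.zeros(n), n)
    print(f"n={n:4d}  distance to unipolar center: {uni_center:.3f}  "
          f"to origin (unipolar vertex / bipolar center): {origin_dist:.3f}")

Under these assumptions, the distance to the origin shrinks as n grows (the hyperplane passes near the bipolar center, or equivalently near the unipolar vertex at the origin), while the distance to the unipolar center stays roughly constant, consistent with the behavior described above.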
