Abstract

Neural networks are widely applied to information processing because of their nonlinear processing capability. Digital hardware implementation appears effective for building neural network systems that support real-time operation and a much broader range of applications. However, digital hardware implementation of analog neural networks is difficult because of restrictions on circuit resources such as circuit scale, placement, and wiring. A technique that uses a pulsed neuron model instead of an analog neuron model has been proposed to address this problem, and its effectiveness has been confirmed. Backpropagation (BP) learning has been proposed for constructing pulsed neural networks (PNNs). However, BP learning takes considerably longer to construct a PNN than to train an analog neural network, so a method of speeding up BP learning in PNNs is needed. In this paper, we propose a fast BP learning method for PNNs based on optimization of the learning rate. In the proposed method, the learning rate is optimized in every learning cycle so as to speed up learning. To evaluate the proposed method, we apply it to pattern recognition problems such as XOR, 3-bit parity, and digit recognition. The results of computer experiments demonstrate the validity of the proposed method. © 2011 Wiley Periodicals, Inc. Electron Comm Jpn, 94(7): 27–34, 2011; Published online in Wiley Online Library (wileyonlinelibrary.com). DOI 10.1002/ecj.10249
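The abstract does not specify how the learning rate is optimized in each cycle, and the paper's pulsed neuron model is not reproduced here. Purely as an illustrative sketch of the general idea (per-cycle learning-rate selection during BP), the following hypothetical example trains an ordinary analog (sigmoid) network on the XOR problem and, in every epoch, greedily picks the learning rate from a small candidate set that most reduces the error after the BP update. The candidate set, network size, and greedy search are all assumptions for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR training data (one of the benchmark problems named in the abstract)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# 2-3-1 feedforward network (size chosen for illustration)
W1 = rng.normal(0.0, 1.0, (2, 3)); b1 = np.zeros(3)
W2 = rng.normal(0.0, 1.0, (3, 1)); b2 = np.zeros(1)

def forward(W1, b1, W2, b2):
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    return h, y

def mse(y):
    return 0.5 * np.mean((y - T) ** 2)

_, y0 = forward(W1, b1, W2, b2)
init_loss = mse(y0)

# hypothetical candidate learning rates tried in every learning cycle
etas = (0.05, 0.2, 0.5, 1.0, 2.0, 4.0)

for epoch in range(3000):
    h, y = forward(W1, b1, W2, b2)
    # standard BP gradients for mean squared error with sigmoid units
    dy = (y - T) * y * (1.0 - y) / len(X)
    gW2 = h.T @ dy;  gb2 = dy.sum(axis=0)
    dh = (dy @ W2.T) * h * (1.0 - h)
    gW1 = X.T @ dh;  gb1 = dh.sum(axis=0)
    # per-cycle learning-rate optimization: keep the eta whose update
    # yields the lowest error (a simple greedy line search, assumed here)
    best_eta, best_loss = etas[0], np.inf
    for eta in etas:
        _, y_try = forward(W1 - eta * gW1, b1 - eta * gb1,
                           W2 - eta * gW2, b2 - eta * gb2)
        loss_try = mse(y_try)
        if loss_try < best_loss:
            best_eta, best_loss = eta, loss_try
    W1 -= best_eta * gW1;  b1 -= best_eta * gb1
    W2 -= best_eta * gW2;  b2 -= best_eta * gb2

_, y_final = forward(W1, b1, W2, b2)
final_loss = mse(y_final)
print(f"initial loss {init_loss:.4f} -> final loss {final_loss:.4f}")
```

Because each cycle keeps the candidate rate with the lowest resulting error, the error is driven down faster than with a single fixed rate; the paper's actual optimization rule for PNNs may differ substantially from this greedy search.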