Abstract
A simplified training strategy for feed-forward neural networks is developed with a view toward VLSI implementation. The gradient-descent backpropagation technique is simplified to train stochastic neural hardware. The proposed learning algorithm uses only ADD, SUBTRACT, and LOGICAL operations, which reduces circuit complexity while increasing speed. The forward and reverse characteristics of the perceptrons are generated using random threshold logic. The proposed hardware consists of 31 perceptrons per layer operating in parallel, with a programmable number of layers operating in sequential mode.
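The sketch below illustrates the two ideas the abstract names: a perceptron activation realized by comparing the pre-activation to a random threshold, and a weight update restricted to ADD, SUBTRACT, and logical tests. It is a minimal software model under assumed parameters (THRESH_RANGE, STEP, the sign-based update rule), not the paper's actual circuit or algorithm.

```python
import random

# Assumed, illustrative constants -- not taken from the paper.
THRESH_RANGE = 256   # range of the random threshold generator
STEP = 1             # fixed integer weight step (add/subtract only)

def stochastic_fire(pre_activation):
    """Random threshold logic: fire with probability that rises with the
    pre-activation, realizing a ramp-shaped activation in expectation."""
    return 1 if pre_activation > random.randrange(-THRESH_RANGE, THRESH_RANGE) else 0

def forward(weights, bias, inputs):
    # With 0/1 inputs, the multiply degenerates to a logical gate
    # followed by accumulation, so only ADD is needed.
    s = bias
    for w, x in zip(weights, inputs):
        if x:            # logical test replaces multiplication
            s += w       # ADD
    return stochastic_fire(s)

def train_step(weights, bias, inputs, target):
    """Sign-based update: nudge each active weight by +/-STEP using only
    ADD/SUBTRACT and logical tests, a crude stand-in for gradient descent."""
    y = forward(weights, bias, inputs)
    err = target - y                  # -1, 0, or +1
    if err != 0:
        for i, x in enumerate(inputs):
            if x:                     # only active inputs are adjusted
                weights[i] += STEP if err > 0 else -STEP
        bias += STEP if err > 0 else -STEP
    return weights, bias

# Usage: train a single stochastic perceptron on a 2-input AND gate.
w, b = [0, 0], 0
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
for _ in range(2000):
    x, t = random.choice(data)
    w, b = train_step(w, b, list(x), t)
```

Because the threshold is random, individual outputs are noisy; the update rule only needs the sign of the error, which is why add/subtract hardware suffices in this model.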