Abstract

A new method for the parallel hardware implementation of artificial neural networks (ANNs) using digital techniques is presented. Signals are represented as uniformly weighted single-bit streams. Techniques for generating bit streams from analog or multibit inputs are also presented. This single-bit representation offers significant advantages over multibit representations since it mitigates the fan-in and fan-out problems typical of distributed systems. To process these bit streams using ANN concepts, functional elements that perform summing, scaling, and squashing have been implemented. These elements are modular and are designed so that they can be easily interconnected. Two new architectures that act as monotonically increasing, differentiable nonlinear squashing functions are also presented. Using these functional elements, a multilayer perceptron (MLP) can be readily constructed. Two examples successfully demonstrate the use of bit streams in the implementation of ANNs. Since every functional element is individually instantiated, the implementation is genuinely parallel. The results show that this bit-stream technique is viable for the hardware implementation of a variety of distributed systems, and of ANNs in particular.
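The abstract does not specify the exact encoding or circuits used; the following is a minimal software sketch, assuming a unipolar stochastic encoding in which a value in [0, 1] is represented by the probability of a 1 in the stream, and assuming AND-gate multiplication for scaling, as is standard in stochastic computing. It is offered only to illustrate the general idea of uniformly weighted single-bit streams, not the authors' hardware design.

```python
import numpy as np

def to_bitstream(value, length, rng):
    """Encode a value in [0, 1] as a uniformly weighted single-bit stream:
    each bit is 1 with probability equal to the value, so the mean of the
    stream approximates the encoded number."""
    return (rng.random(length) < value).astype(np.uint8)

def scale(stream_a, stream_b):
    """Bitwise AND of two independent unipolar streams approximates the
    product of the encoded values (assumed scaling element)."""
    return stream_a & stream_b

def decode(stream):
    """Recover the encoded value as the fraction of 1s in the stream."""
    return stream.mean()

rng = np.random.default_rng(0)
a = to_bitstream(0.6, 4096, rng)
b = to_bitstream(0.5, 4096, rng)
print(decode(a), decode(b), decode(scale(a, b)))  # roughly 0.6, 0.5, 0.3
```

Longer streams reduce the variance of the decoded value, which is the usual accuracy/latency trade-off in bit-stream representations.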
