Abstract

Area and noise-to-signal ratio (NSR) are the two main factors in the hardware implementation of neural networks. Despite attempts to reduce the area of sigmoid and hyperbolic tangent activation functions, they cannot match the efficiency of the threshold activation function. This paper proposes a new NSR-efficient architecture for threshold networks. The proposed architecture uses a different number of bits for weight storage in each layer; the optimum number of bits per layer is derived mathematically from a stochastic model. The network is trained with the recently introduced Extreme Learning Machine (ELM) algorithm. A 4-7-4 network is considered as a case study, and its hardware implementation is investigated for different weight accuracies. The proposed design is more efficient under the area × NSR performance metric. A VLSI implementation of the proposed architecture in a 0.18 μm CMOS process is presented, showing 44.16%, 58.04%, and 67.30% improvement for total bit counts of 16, 20, and 24, respectively.
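The training procedure named in the abstract can be illustrated with a minimal sketch of ELM for a 4-7-4 network with threshold hidden activations: random, untrained input weights, followed by a single least-squares solve for the output weights. All data, variable names, and sizes other than the 4-7-4 topology are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# 4-7-4 topology from the paper's case study
n_in, n_hidden, n_out = 4, 7, 4

# Toy training data (purely illustrative)
X = rng.standard_normal((200, n_in))
T = rng.standard_normal((200, n_out))  # target outputs

# ELM step 1: input weights and biases are drawn at random and never trained
W_in = rng.standard_normal((n_in, n_hidden))
b = rng.standard_normal(n_hidden)

# ELM step 2: hidden layer uses a threshold (step) activation
H = np.where(X @ W_in + b > 0, 1.0, 0.0)

# ELM step 3: output weights come from a least-squares solve
# via the Moore-Penrose pseudoinverse of the hidden-layer matrix
W_out = np.linalg.pinv(H) @ T

# Network output for the training set
Y = H @ W_out
```

Because only the output layer is solved for, training reduces to one pseudoinverse computation, which is what makes ELM attractive for quickly retraining networks under different weight quantizations.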
