Abstract

In this paper, a VLSI design of a bit-serial artificial neuron circuit is proposed. Unlike ordinary bit-serial architectures, which usually start from the least significant bit (LSB), the proposed design starts by processing the most significant bit (MSB). With the MSB-first approach, the more significant part of the result is generated earlier, and the intermediate result is progressively refined as the less significant bits are processed. An artificial neuron applies an activation function at its output, and many commonly used activation functions, such as the sigmoid and the rectified linear unit (ReLU), saturate to 0 for large negative inputs; some also saturate to 1 for large positive inputs. Therefore, when the intermediate result is already sufficiently positive or negative, the remaining processing of the less significant bits can be skipped. Our preliminary results show that the approximation introduced by the proposed early termination still yields the same classification accuracy as full-precision computation, while reducing the processing cycles by more than 25%. The proposed methodology can be applied to the design of hardware accelerators for neuron-based machine learning networks such as neural networks (NNs) and convolutional NNs (CNNs).
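The early-termination idea described above can be illustrated in software. The following is a minimal sketch, not the paper's actual circuit: it accumulates a weighted sum bit-serially, MSB first, over unsigned inputs, and for a ReLU activation it stops as soon as the partial sum plus the largest possible contribution of the remaining bits can no longer become positive. The function name, the cycle counter, and the unsigned-input assumption are illustrative choices, not details from the paper.

```python
def relu_neuron_msb_first(weights, inputs, n_bits):
    """Bit-serial MSB-first weighted sum with early termination for ReLU.

    Illustrative sketch (not the paper's circuit): `inputs` are unsigned
    n_bits-wide integers, `weights` are signed integers.
    Returns (activation, cycles_used).
    """
    # Largest possible contribution per remaining bit position comes
    # only from the positive weights (each input bit is 0 or 1).
    pos_weight_sum = sum(w for w in weights if w > 0)

    partial = 0
    cycles = 0
    for k in range(n_bits - 1, -1, -1):          # MSB first
        # Add the contribution of bit position k of every input.
        partial += sum(w * ((x >> k) & 1) for w, x in zip(weights, inputs)) << k
        cycles += 1
        # Upper bound on what the unprocessed bits (positions 0..k-1)
        # could still add to the sum.
        remaining_max = pos_weight_sum * ((1 << k) - 1)
        if partial + remaining_max <= 0:
            return 0, cycles                     # ReLU saturates at 0 early
    return max(partial, 0), cycles
```

For example, with weights `[-3, -2, 1]` and 3-bit inputs `[7, 6, 1]`, the partial sum after the first (MSB) cycle is already so negative that no remaining bits can make it positive, so the neuron outputs 0 after one cycle instead of three. This mirrors the cycle savings the abstract reports, though the actual hardware operates on a refined intermediate result rather than this software bound.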
