Abstract

This work enhances a previous neural network hardware implementation based on an efficient combination of Stochastic Computing (SC) and Morphological Neural Networks (MNN). The enhancement exploits the natural amenability of morphological neurons to pruning in order to drastically shrink hardware resources and increase the compactness of the network. To this end, we extended our original hybrid two-layer neural network to the MNIST classification problem, a much more demanding benchmark with about 160,000 trainable parameters. 92% of the weights of the morphological layer were discarded, drastically reducing hardware resources and power dissipation without degrading test accuracy. An extensive comparison with other recent neural network designs shows that the proposed design achieves significant improvements in energy efficiency, throughput, and hardware resources.
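As a rough illustration of why morphological neurons lend themselves to pruning, the following minimal Python sketch models a max-plus morphological layer and a simple thresholding criterion. This is not the paper's actual pruning method or hardware design; the layer definition, the threshold value, and the helper names are assumptions for illustration only.

```python
import numpy as np

# Assumed formulation: a max-plus morphological neuron computes
#   y_k = max_j (x_j + w_kj),
# in contrast to the weighted sum of a classical perceptron.
def morphological_layer(x, W):
    # x: (n_inputs,), W: (n_neurons, n_inputs) -> y: (n_neurons,)
    return np.max(x[None, :] + W, axis=1)

# Hypothetical pruning criterion: weights far below the threshold rarely or
# never win the max, so they can be dropped; only the surviving
# (neuron, input) pairs would need hardware resources.
def prune(W, threshold):
    mask = W > threshold                    # keep only sufficiently large weights
    W_pruned = np.where(mask, W, -np.inf)   # -inf entries can never win the max
    return W_pruned, mask

rng = np.random.default_rng(0)
W = rng.normal(size=(10, 784))              # e.g. a layer over an MNIST-sized input
x = rng.random(784)
W_pruned, mask = prune(W, threshold=1.5)
print(morphological_layer(x, W_pruned).shape, 1 - mask.mean())  # output shape, pruned fraction
```

Because the max operation only ever propagates the single winning weight per neuron, discarding the losing weights leaves the output unchanged for most inputs, which is the intuition behind the large pruning ratio reported above.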
