Abstract

We have performed a set of simulation experiments on hardware neural networks (NNs) to analyze how the number of synapses affects network accuracy for different NN architectures and datasets. As a reference, we take a technology based on 4-kbit 1T1R ReRAM arrays employing resistive switching devices built on HfO2 dielectrics. In our study, fully dense neural networks (FdNNs) and convolutional neural networks (CNNs) were considered, and the NN size was varied in terms of both the number of synapses and the number of hidden-layer neurons. CNNs perform better when the number of available synapses is limited. When quantized synaptic weights are included, we observed that NN accuracy decreases significantly as the number of synapses is reduced; in this respect, a trade-off between the number of synapses and the NN accuracy has to be achieved. Consequently, the CNN architecture must be carefully designed; in particular, different datasets need architectures tailored to their complexity to achieve good results. Given the number of variables involved in optimizing a NN hardware implementation, a specific solution has to be worked out in each case in terms of synaptic weight levels, NN architecture, etc.
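As a rough illustration of the two quantities the abstract relates, the Python sketch below counts synapses for a small fully dense network and a single convolutional layer, and applies uniform quantization of float weights to a limited number of conductance levels, as a stand-in for the discrete states of a 1T1R ReRAM synapse. All layer sizes, function names, and the 8-level choice are illustrative assumptions, not values taken from the study.

```python
import numpy as np

# --- Synapse (weight) counting for the two architectures compared ---

def dense_synapses(layer_sizes):
    """Number of synapses in a fully dense NN, biases excluded."""
    return sum(n_in * n_out for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

def conv_synapses(in_channels, out_channels, kernel_h, kernel_w):
    """Number of synapses in one convolutional layer: weights are shared
    across spatial positions, so the count is independent of image size."""
    return in_channels * out_channels * kernel_h * kernel_w

# A 28x28 grayscale input (e.g. an MNIST-like dataset), sizes are hypothetical:
print(dense_synapses([28 * 28, 128, 10]))  # 101632 synapses
print(conv_synapses(1, 8, 3, 3))           # 72 synapses in the conv layer

# --- Uniform quantization to a limited number of conductance levels ---

def quantize(weights, n_levels):
    """Snap float weights onto n_levels equally spaced values spanning
    the weight range."""
    w_min, w_max = weights.min(), weights.max()
    step = (w_max - w_min) / (n_levels - 1)
    return w_min + np.round((weights - w_min) / step) * step

w = np.random.randn(128, 10).astype(np.float32)
w_q = quantize(w, n_levels=8)  # e.g. 8 conductance levels per device
# Quantization error is bounded by half a step:
print(np.abs(w - w_q).max() <= (w.max() - w.min()) / (2 * (8 - 1)) + 1e-6)
```

The weight sharing visible in `conv_synapses` is the mechanism behind the abstract's claim: a convolutional layer reuses a small kernel over the whole input, so its synapse count stays small where a dense layer's count grows with the input size.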

Highlights

  • Neuromorphic engineering is a booming research field, given its connections to Artificial Intelligence (AI) applications [1]

  • We have analyzed the architecture of neural networks for hardware implementation by means of simulation experiments, to assess the connection between the number of synapses in a NN and its accuracy

  • The results show that convolutional neural networks (CNN) work better when the number of synapses to be used is limited



Introduction

Neuromorphic engineering is a booming research field, given its connections to Artificial Intelligence (AI) applications [1]. The resources available at current data centers allow the lengthy training processes of these networks. However, computing architectures are limited by the slowing down of Moore's law for device scaling and, above all, by the physical separation of processor and memory units (the von Neumann bottleneck), which places an enormous burden on computer operation since data have to shuttle back and forth between these units. The memory wall problem (the rising performance gap between memory and processors) also contributes to slowing down AI-oriented applications [1,2]. In addition, the training process of state-of-the-art NNs has a huge carbon footprint, as shown in Ref. [3].


