Machine learning algorithms that achieve impressive performance on conventional computers are computationally demanding and therefore consume substantial energy during training and testing. Compact neuro-inspired devices are thus required to implement neural network applications efficiently under tight energy and area budgets. In this paper, the learning characteristics and performance of a nanoscale titanium dioxide (TiO2) synaptic device are analyzed by implementing it in a hardware-based neural network for digit classification. Our model is experimentally validated in 32-nm CMOS technology, and the results demonstrate high computational ability together with improved accuracy and efficient resource usage at low energy and small area. Compared to a state-of-the-art Ag:Si synaptic-device-based neural network, the proposed model exhibits a 20% energy gain, a 16.82% accuracy improvement, and 18% lower total latency. Furthermore, compared to a software-based (i.e., computer-based) neural network implementation, our TiO2-based model achieves an accuracy of 90.01% on the MNIST dataset with reduced energy consumption. Consequently, our model, characterized by a low hardware implementation cost, emerges as a promising neuro-inspired hardware solution for a range of neural network applications. The proposed model further demonstrates strong performance in experiments on both the MNIST and Fisher's Iris datasets; on the latter, it attains notable precision (94.5%), recall (91.5%), and F1-score (92.9%), along with an accuracy of 93.04%.