Abstract

Artificial neural networks (ANNs) are widely used across many application areas today, yet their practical implementation faces several challenges: the number of samples required to train the network; the number of adders, multipliers, nonlinear transfer functions, and storage elements; and the speed of calculation in both the training and recall phases. In this paper, the RAM-based neural network is investigated. It requires no weights, adders, multipliers, or transfer functions, in either hardware or software, at the cost of large RAM utilization, and only a small number of samples are needed for training. The network is implemented on an FPGA platform using the Stratix IV GX FPGA development board, which provides a large on-board RAM. A considerable speedup of 237 is achieved in both the training and recall phases, and a comparable error rate of 7.6 is achieved when the MNIST (Modified National Institute of Standards and Technology) database is used to train the network for handwritten digit recognition.
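To make the idea of a weightless, RAM-based network concrete, the following is a minimal illustrative sketch of a WiSARD-style RAM discriminator in Python. It is an assumption-laden toy, not the paper's FPGA design: each "RAM neuron" is a lookup table addressed by an n-bit tuple of input pixels, training writes a 1 at the addressed location, and recall counts how many RAM neurons respond, so no weights, adders, multipliers, or activation functions appear anywhere.

```python
import random

class RAMDiscriminator:
    """Illustrative WiSARD-style RAM discriminator (hypothetical sketch).

    One discriminator is trained per class; classification picks the
    discriminator with the highest recall score.
    """

    def __init__(self, input_bits, tuple_size, seed=0):
        assert input_bits % tuple_size == 0
        rng = random.Random(seed)
        # Fixed pseudo-random mapping of input bit positions into n-tuples.
        order = list(range(input_bits))
        rng.shuffle(order)
        self.tuples = [order[i:i + tuple_size]
                       for i in range(0, input_bits, tuple_size)]
        # One RAM per n-tuple, modeled as the set of addresses written so far.
        self.rams = [set() for _ in self.tuples]

    def _addresses(self, pattern):
        # Form the RAM address of each n-tuple from the selected input bits.
        for t, bits in enumerate(self.tuples):
            addr = 0
            for b in bits:
                addr = (addr << 1) | pattern[b]
            yield t, addr

    def train(self, pattern):
        # Training = writing a 1 at each addressed RAM location.
        for t, addr in self._addresses(pattern):
            self.rams[t].add(addr)

    def score(self, pattern):
        # Recall = counting the RAM neurons that respond with a 1.
        return sum(addr in self.rams[t] for t, addr in self._addresses(pattern))
```

A trained discriminator scores its own training pattern with the maximum possible response (one per RAM neuron), while unseen patterns score lower in proportion to how many of their n-tuple addresses were never written.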

Highlights

  • Conventional artificial neural networks (ANNs) [1,2,3] are built from the well-known weighted-sum-and-threshold artificial neurons of McCulloch and Pitts, which are comparatively simple processing units

  • Hardware implementation of conventional ANNs requires large numbers of adders and multipliers for the artificial neurons [3], which makes fully parallel implementation of the networks challenging

  • The aim of this paper is to implement a neural network on a reconfigurable FPGA platform for handwritten digit recognition; a robust, high-speed hardware design is therefore of great importance



Introduction

Conventional artificial neural networks (ANNs) [1,2,3] are built from the well-known weighted-sum-and-threshold artificial neurons of McCulloch and Pitts, which are comparatively simple processing units. These artificial neurons communicate with each other through a large set of weighted connections. The artificial neuron is described by two equations, (1) and (2): the first specifies a linear weighted sum of the inputs to the neuron, and the second applies a nonlinear activation function to that sum. Hardware implementation of conventional ANNs requires large numbers of adders and multipliers for the artificial neurons [3], which makes fully parallel implementation of the networks challenging.
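The two equations referred to above take the standard McCulloch–Pitts/perceptron form (a sketch in common notation; the paper's own symbols may differ):

u_j = \sum_{i=1}^{n} w_{ji} x_i + b_j   (1)

y_j = \varphi(u_j)   (2)

where x_i are the inputs to neuron j, w_{ji} the connection weights, b_j the bias, and \varphi a nonlinear activation function such as a hard threshold or a sigmoid. Equation (1) is the source of the adders and multipliers that dominate a parallel hardware implementation.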


