Abstract

FPGA implementation of the hyperbolic tangent activation function for a multilayer perceptron structure is attractive; however, there is a lack of preliminary results on the choice of memory size, particularly when the LUT of the function is stored in dedicated on-chip block RAM. The aim of this investigation was to gain insight into the distortions of the selected neuron model output by evaluating the transfer function RMS error and the mean and maximum errors of the neuron output signal while changing the gain and memory size of the activation function. To this end, a range-addressable activation function for the second-order normalized lattice-ladder neuron was implemented in an Artix-7 FPGA. Various gain and memory constraints were investigated. Increasing the LUT memory size and gain yielded a smaller output signal error and a nonlinear influence on the transfer function. 2 kB of BRAM is sufficient to achieve a tolerable maximum error of less than 0.4 % while utilizing only 0.36 % of the total on-chip block memory.

DOI: http://dx.doi.org/10.5755/j01.eie.22.2.14598
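
As a rough, non-authoritative illustration of the trade-off described above, the short sketch below builds a tanh look-up table of a given memory size and measures its maximum and RMS deviation from the exact function. The 16-bit entry width, the covered input range of [-4, 4), and nearest-entry addressing without interpolation are assumptions made here for illustration only; the sketch does not model the gain sweep or the lattice-ladder neuron itself.

    import numpy as np

    def tanh_lut_errors(mem_bytes=2048, word_bytes=2, n_test=200_000):
        """Estimate max and RMS error of a tanh LUT of a given memory size.

        Assumptions (not taken from the paper): 16-bit entries, covered
        input range [-4, 4), nearest-entry addressing, no interpolation.
        """
        depth = mem_bytes // word_bytes                 # number of stored samples
        x_max = 4.0
        step = 2.0 * x_max / depth                      # input spacing of samples
        grid = -x_max + step * np.arange(depth)         # sample points of the LUT
        frac_bits = 8 * word_bytes - 2                  # sign bit + one integer bit
        lut = np.round(np.tanh(grid) * 2**frac_bits) / 2**frac_bits

        x = np.random.uniform(-x_max, x_max, n_test)    # test stimuli
        idx = np.clip(np.rint((x + x_max) / step).astype(int), 0, depth - 1)
        err = lut[idx] - np.tanh(x)
        return np.abs(err).max(), np.sqrt(np.mean(err**2))

    for mem in (512, 1024, 2048, 4096):
        e_max, e_rms = tanh_lut_errors(mem_bytes=mem)
        print(f"{mem:4d} B LUT: max error {e_max:.3%}, RMS error {e_rms:.4%}")

Doubling the memory (and hence the number of stored samples) roughly halves both error measures under these assumptions, which is the qualitative trend reported in the paper.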

Highlights

  • Artificial neural network implementation in an FPGA is attractive because of the parallel and periodic hardware structure, fast reconfigurability, hundreds of dedicated DSP and memory slices, and convenient high-level synthesis tools [1]

  • Neuron activation functions such as the sigmoid [2], logarithmic sigmoid [3], or hyperbolic tangent [4] are the ones most commonly used in artificial neural networks

  • This paper presents an FPGA implementation of a range-addressable look-up table (LUT) approximation of the hyperbolic tangent function


Summary

INTRODUCTION

Artificial neural network implementation in an FPGA is attractive because of the parallel and periodic hardware structure, fast reconfigurability, hundreds of dedicated DSP and memory slices, and convenient high-level synthesis tools [1]. When only a few bits are used as the input of the activation function, it makes sense to use a combinational approximation based on direct bit-level mapping without arithmetic operators. It is shown in [5] that with 6-bit precision the maximal absolute error of the activation function is less than 1 %. A maximum allowable error of 2 % with a 9-bit input is achieved in [12], where the hyperbolic tangent function is implemented in hardware using hybrid PWL and LUT methods. The accuracy of the approximated hyperbolic tangent under various input signal precisions and LUT sizes was investigated in [11].
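
As a naive reference point for the precision figures cited above, the sketch below enumerates a combinational (truth-table) tanh mapping in which every quantized input word is wired directly to a precomputed output word, so no arithmetic is needed at run time. Interpreting the cited 6-bit precision as six fractional bits of the input and covering the range [-4, 4) are assumptions made here; the sketch does not reproduce the bit-level mapping of [5] or the hybrid PWL/LUT scheme of [12].

    import numpy as np

    # Combinational approximation sketch: every representable input word
    # maps directly to a precomputed tanh output word (a truth table),
    # so evaluation needs no arithmetic operators at run time.
    FRAC_BITS = 6                      # assumed input precision (fractional bits)
    X_RANGE = 4.0                      # assumed covered input range [-4, 4)
    STEP = 2.0 ** -FRAC_BITS

    grid = np.arange(-X_RANGE, X_RANGE, STEP)   # every representable input word
    table = np.tanh(grid)                       # the precomputed output words

    # Worst-case error when a real-valued input is rounded to the nearest word.
    x = np.random.uniform(-X_RANGE, X_RANGE, 200_000)
    idx = np.clip(np.rint((x + X_RANGE) / STEP).astype(int), 0, grid.size - 1)
    print(f"max |error| = {np.abs(table[idx] - np.tanh(x)).max():.3%}")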

IMPLEMENTATION OF LATTICE-LADDER NEURON AND ITS NONLINEAR ACTIVATION FUNCTION
EVALUATION CRITERIA
RESULTS
CONCLUSIONS
