Abstract

When designing an artificial neural network system in hardware, the implementation of the activation function is an important consideration. The hyperbolic tangent activation function is the most popular, and many approaches exist to approximate it, with varying trade-offs between area utilization and delay. Unfortunately, there is little data available reporting the minimum accuracy required of the activation function approximation in order to obtain good system-level performance; this is particularly the case for table-based approximation methods. In this paper, we demonstrate that table-based approximation methods are very well suited for implementing the tanh activation function, as well as its derivative, in a variety of feed-forward artificial neural network topologies that employ the popular RPROP or Levenberg-Marquardt training algorithms. It is shown that when these training methods are used, an activation function approximation with a relatively high maximum error can still yield results comparable to a floating-point implementation. This finding suggests that table-based methods can be employed very efficiently in terms of area and speed, making them a promising option for any VLSI or FPGA artificial neural network hardware design.
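For illustration only (this is not the specific scheme evaluated in the paper), a table-based tanh approximation typically precomputes samples of tanh over a bounded input range, indexes them at run time, and optionally interpolates between adjacent entries; the table size and interpolation order determine the maximum approximation error. The sketch below assumes a uniformly sampled table over [0, 4) with linear interpolation and exploits the odd symmetry tanh(-x) = -tanh(x). All names and parameter values (TABLE_SIZE, RANGE) are illustrative assumptions, not taken from the paper.

```c
#include <math.h>
#include <stdio.h>

#define TABLE_SIZE 256   /* number of segments; larger table -> smaller max error */
#define RANGE      4.0   /* tanh saturates near +/-1 for |x| > 4, so clamp beyond this */

static double tanh_table[TABLE_SIZE + 1];

/* Precompute tanh samples on [0, RANGE]; run once at startup
   (in hardware this table would live in a ROM/LUT). */
static void init_tanh_table(void) {
    for (int i = 0; i <= TABLE_SIZE; i++) {
        tanh_table[i] = tanh((double)i * RANGE / TABLE_SIZE);
    }
}

/* Table-based tanh with linear interpolation, using odd symmetry. */
static double tanh_lut(double x) {
    double ax = fabs(x);
    double y;
    if (ax >= RANGE) {
        y = 1.0;                         /* saturate outside the table range */
    } else {
        double pos  = ax * TABLE_SIZE / RANGE;
        int    idx  = (int)pos;          /* table segment */
        double frac = pos - idx;         /* position within the segment */
        y = tanh_table[idx] + frac * (tanh_table[idx + 1] - tanh_table[idx]);
    }
    return (x < 0.0) ? -y : y;
}

int main(void) {
    init_tanh_table();
    /* The derivative needed during training can be recovered as 1 - tanh(x)^2,
       so the same table also serves the backward pass. */
    for (double x = -3.0; x <= 3.0; x += 1.5) {
        double t = tanh_lut(x);
        printf("x=%5.2f  lut=%8.5f  libm=%8.5f  d/dx~=%8.5f\n",
               x, t, tanh(x), 1.0 - t * t);
    }
    return 0;
}
```

In a fixed-point hardware implementation the same structure applies, with the table entries quantized to the chosen word length; the choice of table size and interpolation trades area against the maximum approximation error discussed in the abstract.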
