Abstract

The sigmoid function is a widely used nonlinear activation function in neural networks. In this article, we present a modular approximation methodology for efficient fixed-point hardware implementation of the sigmoid function. Our design consists of three modules: a piecewise linear (PWL) approximation that supplies the initial solution, a Taylor series approximation of the exponential function, and a Newton–Raphson-based refinement that produces the final solution. This modularity lets the designer choose the most appropriate approximation method for each module independently. Performance evaluation results indicate that our work strikes an appropriate balance among approximation accuracy, hardware resource utilization, and performance.
