Abstract
Deep neural networks (DNNs) achieve excellent results in many fields. The softmax function is widely used in DNNs, and implementing it in hardware while balancing accuracy, speed, area, and power is a critical issue. This paper proposes a piecewise exponential lookup table (LUT) method, which reduces the LUT size of the exponential function in the DNN softmax layer. Experimental results show that hardware using this method consumes less area and power than previous work. The design supports a wide input range with high accuracy: the absolute error of the computed result is at most 4.5×10⁻⁶. These results show that the proposed design is suitable for the softmax layer in most hardware implementations of DNNs.
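The core idea of a piecewise exponential LUT can be illustrated with the standard table-decomposition technique: since exp(a + b) = exp(a)·exp(b), a fixed-point input can be split into fields, each indexing a small table, and the partial results multiplied. The sketch below is a minimal illustration of that principle, not the paper's exact design; the field widths and input range are assumptions chosen for clarity.

```python
import math

# Minimal sketch of a piecewise (decomposed) exponential LUT.
# Assumption: input x is quantized to 8-bit fixed point with a 4-bit
# integer part i and a 4-bit fractional part f, so x = i + f and
# exp(x) = exp(i) * exp(f). Two 16-entry tables then replace one
# 256-entry table, shrinking LUT storage from 256 to 32 entries.

INT_BITS, FRAC_BITS = 4, 4

# exp() of every possible integer-field value
LUT_INT = [math.exp(i) for i in range(1 << INT_BITS)]
# exp() of every possible fractional-field value
LUT_FRAC = [math.exp(k / (1 << FRAC_BITS)) for k in range(1 << FRAC_BITS)]

def exp_lut(x: float) -> float:
    """Approximate exp(x) for x in [0, 16) via two-table decomposition."""
    code = int(x * (1 << FRAC_BITS))              # quantize to fixed point
    i = code >> FRAC_BITS                         # high (integer) field
    k = code & ((1 << FRAC_BITS) - 1)             # low (fractional) field
    return LUT_INT[i] * LUT_FRAC[k]
```

At every representable input the product equals the true exponential exactly, so the only error for this toy decomposition comes from input quantization; a hardware design would additionally quantize the table entries and the multiplier output, which is where the reported absolute-error bound comes in.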