Abstract

Today, almost all computer systems use IEEE-754 floating point to represent real numbers. Recently, posit was proposed as an alternative to IEEE-754 floating point, as it offers better accuracy and a larger dynamic range. However, the configurable nature of posit, with varying numbers of regime and exponent bits, has acted as a deterrent to its adoption. To overcome this shortcoming, we propose the fixed-posit representation, in which the number of regime and exponent bits is fixed, and present the design of a fixed-posit multiplier. We evaluate the fixed-posit multiplier on error-resilient applications from the AxBench and OpenBLAS benchmarks, as well as on neural networks. The proposed fixed-posit multiplier achieves 47%, 38.5%, and 22% savings in power, area, and delay, respectively, compared to posit multipliers, and up to 70%, 66%, and 26% savings in power, area, and delay, respectively, compared to a 32-bit IEEE-754 multiplier. These savings are accompanied by minimal output quality loss (1.2% average relative error) across the OpenBLAS and AxBench workloads. Further, for neural networks such as ResNet-18 on ImageNet, we observe a negligible accuracy loss (0.12%) when using the fixed-posit multiplier.
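To make the representation concrete, below is a minimal Python sketch of decoding a fixed-posit bit pattern. The field layout (sign, then a fixed-width regime field, then exponent, then fraction), the default widths, and the two's-complement interpretation of the regime field are illustrative assumptions, not the paper's exact bit-level encoding; the key idea it captures is that, unlike standard posit, the regime occupies a fixed number of bits rather than a variable-length run.

```python
def decode_fixed_posit(bits: int, n: int = 32, R: int = 3, E: int = 2) -> float:
    """Decode an n-bit fixed-posit pattern (illustrative sketch only).

    Assumed layout: [sign | regime (R bits) | exponent (E bits) | fraction].
    The regime is read as a two's-complement integer k, and the value is
    (-1)^s * useed^k * 2^e * (1 + f), with useed = 2^(2^E), mirroring
    how a standard posit scales by its regime.
    """
    sign = (bits >> (n - 1)) & 1
    regime = (bits >> (n - 1 - R)) & ((1 << R) - 1)
    if regime >= (1 << (R - 1)):          # two's-complement regime (assumption)
        regime -= (1 << R)
    exponent = (bits >> (n - 1 - R - E)) & ((1 << E) - 1)
    frac_bits = n - 1 - R - E
    fraction = bits & ((1 << frac_bits) - 1)
    useed = 2 ** (2 ** E)                 # scale contributed per regime step
    mantissa = 1.0 + fraction / (1 << frac_bits)
    value = (useed ** regime) * (2.0 ** exponent) * mantissa
    return -value if sign else value

if __name__ == "__main__":
    # sign=0, regime=001 (k=+1), exponent=00, fraction=0 -> useed^1 = 16.0
    print(decode_fixed_posit(0x10000000))
```

Because the regime width R and exponent width E are compile-time constants here, a hardware multiplier for this format can use fixed field extractors instead of the run-length decoders a standard posit requires, which is the source of the power, area, and delay savings reported above.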

