Abstract

The increased demand for better accuracy and precision, together with wider data sizes, has strained the current floating point system and motivated the development of the POSIT system. The POSIT system supports flexible formats and tapered precision, providing equivalent accuracy with fewer bits. This paper examines the POSIT and floating point systems, comparing the performance of 32-bit POSIT and 32-bit floating point implementations of an IIR notch filter. Given that the bulk of the calculations in the filter are multiplication operations, an Enhanced Radix-4 Modified Booth Multiplier (ERMBM) is implemented to increase calculation speed and efficiency. ERMBM improves area, speed, power, and energy relative to the regular POSIT multiplier by 26.80%, 51.97%, 0.54%, and 52.22%, respectively, without affecting accuracy. Moreover, the Taylor series technique is adopted to implement the division operation, along with a cosine arithmetic unit, for POSIT numbers. Comparing the two systems, POSIT achieves 92.31% accuracy versus 23.08% for floating point. POSIT also reduces area by 21.77%, at the cost of increased delay. However, when the ERMBM replaces the regular POSIT multiplier in the filter, POSIT outperforms floating point in all performance metrics (area, speed, power, and energy) by 35.68%, 20.66%, 31.49%, and 45.64%, respectively.
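The key idea behind a Radix-4 modified Booth multiplier, as used in the ERMBM, can be illustrated with a short software model. The Python sketch below is only an illustrative integer model, not the paper's hardware design: it recodes the multiplier into digits in {-2, -1, 0, 1, 2}, so an n-bit multiplication needs roughly n/2 partial products instead of n, which is the source of the speed advantage.

```python
def booth_radix4_digits(y: int, width: int):
    """Recode a signed `width`-bit multiplier into radix-4 Booth digits.

    Each digit d_k = y[2k-1] + y[2k] - 2*y[2k+1] lies in {-2,-1,0,1,2},
    so a width-bit multiply needs only ceil(width/2) partial products.
    """
    bits = [(y >> i) & 1 for i in range(width)]  # LSB first
    bits.append(bits[-1])        # one bit of sign extension
    bits.insert(0, 0)            # implicit y[-1] = 0
    return [bits[i - 1] + bits[i] - 2 * bits[i + 1]
            for i in range(1, len(bits) - 1, 2)]

def booth_multiply(x: int, y: int, width: int = 8) -> int:
    """Multiply two signed integers by accumulating Booth-selected
    partial products, each shifted two positions further than the last."""
    return sum((d * x) << (2 * k)
               for k, d in enumerate(booth_radix4_digits(y, width)))
```

In hardware, each digit selects a trivially formed partial product (0, ±x, ±2x, the last two being a shift and/or a negation), which is why halving the partial-product count translates directly into smaller and faster multiplier trees.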

Highlights

  • The discovery of deep neural networks, an increase in data size, and a high demand for better accuracy and precision mean that the standard floating point (FP) system will not be efficient enough to meet specified requirements

  • The focus of this paper is to demonstrate that the POSIT arithmetic unit outperforms floating point in accuracy, area, speed, power, and energy

  • POSIT is compared with floating point before and after enhancing POSIT arithmetic units to conduct a fair comparison between the two numbering systems

Introduction

Received: 16 November 2021

The discovery of deep neural networks, an increase in data size, and a high demand for better accuracy and precision mean that the standard floating point (FP) system will not be efficient enough to meet specified requirements. The bit width of the three fields in the FP format is fixed, which results in redundant bits in both the mantissa and the exponent. In floating point format, bits are also wasted on exceptions, including NaNs (not a number); these exception cases represent illegal mathematical operations, such as dividing a number by zero [1]. Unums, which stands for universal numbers, were invented by John Gustafson as an alternative for representing real numbers using a finite number of bits. In POSIT, the exponent and mantissa bits are not always present in a representation. What distinguishes POSIT from floating point is the presence of an extra field: the variable-length regime.
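The role of the regime field is easiest to see in a small decoder. The Python sketch below is an illustrative model (not the paper's hardware unit), assuming the standard POSIT layout with `n` total bits and `es` exponent bits: the regime is a run of identical bits ended by the opposite bit, and whatever room it leaves tapers into exponent and fraction bits.

```python
import math

def decode_posit(bits: int, n: int = 8, es: int = 0) -> float:
    """Decode an n-bit POSIT word (given as an unsigned int) to a float.

    Field order, MSB to LSB: sign | regime | exponent | fraction.
    Because the regime's length varies, exponent and fraction bits
    exist only when the regime leaves room for them (tapered precision).
    """
    if bits == 0:
        return 0.0
    if bits == 1 << (n - 1):
        return math.nan                   # NaR: "not a real"
    sign = (bits >> (n - 1)) & 1
    if sign:
        bits = (-bits) & ((1 << n) - 1)   # two's-complement negate
    # Regime: count the run of identical bits after the sign bit.
    run_bit = (bits >> (n - 2)) & 1
    k, i = 0, n - 2
    while i >= 0 and ((bits >> i) & 1) == run_bit:
        k += 1
        i -= 1
    regime = (k - 1) if run_bit else -k
    i -= 1                                # skip the terminating bit
    rem = max(i + 1, 0)                   # bits left for exponent+fraction
    e_bits = min(es, rem)
    exp = (bits >> (rem - e_bits)) & ((1 << e_bits) - 1)
    exp <<= es - e_bits                   # truncated exponent bits act as 0
    f_bits = rem - e_bits
    frac = bits & ((1 << f_bits) - 1)
    scale = regime * (1 << es) + exp
    value = (1.0 + frac / (1 << f_bits)) * 2.0 ** scale
    return -value if sign else value
```

For example, with `n = 8, es = 0`, the word `0b01010000` decodes to 1.5, while `0b01111111` (a maximal regime run) decodes to 64: long runs trade fraction bits for dynamic range, which is exactly the tapered precision the text describes.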
