Abstract

Modern computational tasks must not only meet a predefined accuracy but also produce results quickly. Optimizing calculations that use floating-point numbers, as opposed to integers, is a non-trivial task, which motivates the search for new ways to improve such operations. This paper presents an analysis and comparison of several floating-point formats: float, posit, and bfloat. Neural networks are one of the most promising areas in which the problem of choosing such formats is especially acute, so we pay particular attention to linear algebra and artificial intelligence algorithms when assessing the efficiency of the new data types in this area. The results show that software implementations of posit16 and posit32 achieve high accuracy but are not particularly fast; bfloat16, on the other hand, differs little from float32 in accuracy yet significantly outperforms it for large amounts of data and complex machine learning algorithms. Posit16 can therefore be used in systems with less stringent performance requirements, under limited memory, or when bfloat16 cannot provide the required accuracy. Bfloat16 can speed up systems based on the IEEE 754 standard, but it does not solve all the problems of conventional floating-point arithmetic. Thus, although posits and bfloats are not a full-fledged replacement for float, under certain conditions they provide advantages that are useful for implementing machine learning algorithms.
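
To make the relationship between bfloat16 and float32 concrete, the sketch below shows the standard bit-level conversion between the two formats: bfloat16 keeps the sign bit and the full 8-bit exponent of float32 but only 7 fraction bits, which is why it preserves the dynamic range while halving storage. This is an illustrative example of the format only, not code from the paper; the helper names and the round-to-nearest-even scheme are our own choices, and NaN handling is omitted.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // Convert an IEEE 754 float32 to bfloat16 (upper 16 bits: sign,
    // 8-bit exponent, 7-bit fraction) using round-to-nearest-even.
    // NaN handling is deliberately omitted in this sketch.
    static uint16_t float_to_bfloat16(float f) {
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof(bits));
        uint32_t rounding_bias = 0x7FFF + ((bits >> 16) & 1);
        return static_cast<uint16_t>((bits + rounding_bias) >> 16);
    }

    // Widen a bfloat16 back to float32 by appending 16 zero bits.
    static float bfloat16_to_float(uint16_t b) {
        uint32_t bits = static_cast<uint32_t>(b) << 16;
        float f;
        std::memcpy(&f, &bits, sizeof(f));
        return f;
    }

    int main() {
        float x = 3.14159265f;
        uint16_t bx = float_to_bfloat16(x);
        std::printf("float32: %.8f  bfloat16 round trip: %.8f\n",
                    x, bfloat16_to_float(bx));
        return 0;
    }

The round trip loses precision (roughly two to three significant decimal digits remain) but never the magnitude of the value, which is the trade-off behind the accuracy and memory observations above.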

Highlights

  • With the growth of the computing power of computer systems, machine learning has advanced significantly over the past decades, enabling developers from various subject areas to create stable, high-performance systems based on artificial intelligence technologies

  • A test infrastructure was created for evaluating the performance of real-number storage formats, based on conventional algorithms used in machine learning systems (a minimal timing sketch is given after this list)

  • We studied the performance of posits and bfloats in comparison with IEEE 754 floating-point numbers
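
As a rough illustration of what such test infrastructure can look like, the sketch below times the same dot-product kernel for different element types. The structure, names, and choice of kernel here are our own assumptions rather than the paper's actual suite, which covers linear algebra and machine learning algorithms; software posit or bfloat types from a third-party library would be plugged in as additional template arguments in the same way.

    #include <chrono>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    // Hypothetical micro-benchmark sketch: time a dot product for a given
    // element type so that number formats with a C++ type (native or from
    // a software library) can be compared under identical conditions.
    template <typename T>
    double time_dot_ms(std::size_t n) {
        std::vector<T> x(n, static_cast<T>(1.001));
        std::vector<T> y(n, static_cast<T>(0.999));
        auto start = std::chrono::steady_clock::now();
        T acc = static_cast<T>(0);
        for (std::size_t i = 0; i < n; ++i) acc += x[i] * y[i];
        auto stop = std::chrono::steady_clock::now();
        volatile T sink = acc;   // keep the loop from being optimized away
        (void)sink;
        return std::chrono::duration<double, std::milli>(stop - start).count();
    }

    int main() {
        const std::size_t n = 10000000;
        std::printf("float : %.2f ms\n", time_dot_ms<float>(n));
        std::printf("double: %.2f ms\n", time_dot_ms<double>(n));
        return 0;
    }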


Summary

INTRODUCTION

With the growth of the computing power of computer systems, machine learning has advanced significantly over the past decades, enabling developers from various subject areas to create stable, high-performance systems based on artificial intelligence technologies. Based on the publications cited above, we can state that the posit format is actively investigated in recent work on finding efficient solutions to problems such as matrix calculations and the training and deployment of neural networks. Experimental results, characterized by test accuracy and losses, show that bfloat covers a wide range of tensors in very different networks and compares favorably to float in information-processing accuracy while halving the required memory. At the same time, this data type does not provide significant improvements in accuracy when data are rounded for recognition, as the developer of the posit format emphasizes [3]. Matrix calculations based on floating-point numbers and bfloats have been investigated by Intel engineers, who used GEMM (general matrix multiply) algorithms as the basis for exploring the prospects of low-precision computations in comparison with standard data types that use more than 16 bits [15]. As for the performance of these formats, the question remains open.
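
To ground the GEMM discussion, the sketch below multiplies two matrices whose entries are stored as bfloat16 while accumulating the products in float32, which is the usual pattern in low-precision GEMM work. It is a naive illustration written under our own assumptions (simple truncation on conversion, no blocking or vectorization) and is neither Intel's implementation nor the benchmark code from this paper.

    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    // Widen a stored bfloat16 value to float32 by appending 16 zero bits.
    static float bf16_to_f32(uint16_t b) {
        uint32_t bits = static_cast<uint32_t>(b) << 16;
        float f;
        std::memcpy(&f, &bits, sizeof(f));
        return f;
    }

    // Store a float32 as bfloat16 by plain truncation (no rounding here).
    static uint16_t f32_to_bf16(float f) {
        uint32_t bits;
        std::memcpy(&bits, &f, sizeof(bits));
        return static_cast<uint16_t>(bits >> 16);
    }

    // C = A * B with A (m x k) and B (k x n) stored as bfloat16, row-major;
    // each product is accumulated in a float32 accumulator.
    void gemm_bf16(const std::vector<uint16_t>& A, const std::vector<uint16_t>& B,
                   std::vector<float>& C, int m, int n, int k) {
        for (int i = 0; i < m; ++i)
            for (int j = 0; j < n; ++j) {
                float acc = 0.0f;
                for (int p = 0; p < k; ++p)
                    acc += bf16_to_f32(A[i * k + p]) * bf16_to_f32(B[p * n + j]);
                C[i * n + j] = acc;
            }
    }

    int main() {
        const int m = 2, n = 2, k = 2;
        const float a[] = {1.5f, 2.25f, -0.5f, 4.0f};
        const float b[] = {0.75f, 1.0f, 2.0f, -1.25f};
        std::vector<uint16_t> A(m * k), B(k * n);
        std::vector<float> C(m * n);
        for (int i = 0; i < m * k; ++i) A[i] = f32_to_bf16(a[i]);
        for (int i = 0; i < k * n; ++i) B[i] = f32_to_bf16(b[i]);
        gemm_bf16(A, B, C, m, n, k);
        std::printf("C = [%g %g; %g %g]\n", C[0], C[1], C[2], C[3]);
        return 0;
    }

The sample values are all exactly representable in bfloat16, so here the result matches a pure float32 GEMM; with general data, the truncated 16-bit inputs introduce exactly the kind of rounding error that the accuracy comparisons of these formats are concerned with.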


