Abstract

One of the main decisions when creating a digital design is which arithmetic will be used. The arithmetic determines the hardware resources needed and the latency of every operation. This is especially important in real-time applications such as HIL (Hardware-in-the-Loop), where a real-time simulation of a plant (a power converter, a mechanical system, or any other complex system) is carried out. While fixed-point yields optimal implementations, using considerably fewer resources and allowing smaller simulation steps, its use is restricted to very specific applications because its design effort is quite high. On the other hand, IEEE-754 floating-point may suffer resolution problems in its 32-bit version and excessive hardware usage in its 64-bit version. This paper presents LOCOFloat, a low-cost floating-point format designed for FPGA applications. Its key features are soft normalization of the results and significand and exponent fields in two's complement. This paper shows the implementation of addition, subtraction and multiplication for the proposed format. Both IEEE-754 versions and LOCOFloat are compared by implementing a HIL model of a buck converter. Although the application example is a HIL simulator, other applications could also benefit from the proposed format. Results show that LOCOFloat is as accurate as 64-bit floating-point while reducing the use of DSP blocks by 84%.
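
For illustration, a minimal host-side C sketch of what such a representation could look like is given below. The field widths (a 32-bit significand and an 8-bit exponent), the type name and the conversion helper are assumptions made for this sketch, not details taken from the paper.

    /* Minimal sketch of a LOCOFloat-style value: both fields are plain
     * two's-complement integers, so no separate sign bit or hidden bit
     * is needed. Field widths here are assumed, not the paper's. */
    #include <stdint.h>
    #include <math.h>

    typedef struct {
        int32_t sig;  /* two's-complement significand */
        int8_t  exp;  /* two's-complement exponent (power of two) */
    } locofloat_t;

    /* Interpret the pair as sig * 2^exp, e.g. to check results on a PC. */
    static double locofloat_to_double(locofloat_t x) {
        return ldexp((double)x.sig, x.exp);
    }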

Highlights

  • Most FPGA (Field Programmable Gate Array) designs must meet area or speed constraints; usually a trade-off between both requirements has to be reached

  • This paper proposes a format called LOCOFloat: Low-Cost Floating-point

  • The proposed format comes with a floating-point library that offers the simplest operations needed in HIL (addition, subtraction and multiplication with soft normalization), with the aim of reducing the hardware requirements. It has been designed to avoid the overhead of the IEEE-754 floating-point standard, which implements many operators together with extensive special-case checking (e.g., NaN: Not-a-Number checks), rounding and hard normalization; a sketch of such an operator is given after this list
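
As a rough illustration of how such an operator can stay simpler than IEEE-754, the hypothetical sketch below multiplies two values in the format assumed after the abstract. The single fixed post-shift standing in for soft normalization is purely an assumption of this sketch; the paper's actual hardware algorithm is not reproduced here.

    #include <stdint.h>

    typedef struct { int32_t sig; int8_t exp; } locofloat_t;  /* as sketched above */

    /* Hypothetical multiplication: significands multiply into a wide
     * product and exponents add. Instead of IEEE-754 normalize-to-MSB
     * plus rounding, one fixed arithmetic shift keeps the result within
     * 32 bits; this stands in for "soft normalization" (an assumption). */
    static locofloat_t locofloat_mul(locofloat_t a, locofloat_t b) {
        int64_t prod = (int64_t)a.sig * (int64_t)b.sig;  /* up to 63-bit product */
        locofloat_t r;
        r.sig = (int32_t)(prod >> 32);         /* arithmetic shift assumed */
        r.exp = (int8_t)(a.exp + b.exp + 32);  /* compensate the shift */
        return r;
    }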



Introduction

Most FPGA designs must meet area or speed constraints, and usually a trade-off between both requirements has to be reached. Fixed-point provides an optimal approach in FPGAs, minimizing area and maximizing speed, so many examples in the literature use it [1,2,3,4,5,6]. For the designer, floating-point is easier to use, because there is no need to work out in advance the numeric range required by the application. Another important reason to prefer floating-point over fixed-point is that the former optimizes its resolution using two fields that define the mantissa and the exponent, together with normalization, following scientific notation.
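
To make the scientific-notation idea concrete, the short host-side snippet below uses the standard C frexp function to decompose a few values into a mantissa and a power-of-two exponent; the example values are arbitrary.

    /* Decompose values as v = m * 2^e with |m| in [0.5, 1), which is the
     * mantissa/exponent idea behind floating-point formats. Link with -lm. */
    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double values[] = {0.001, 3.3, 48000.0};
        for (int i = 0; i < 3; i++) {
            int e;
            double m = frexp(values[i], &e);   /* values[i] == m * 2^e */
            printf("%g = %f * 2^%d\n", values[i], m, e);
        }
        return 0;
    }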

