Abstract

Verification of programs using floating-point arithmetic is challenging on several accounts. One of the difficulties of reasoning about such programs is due to the peculiarities of floating-point arithmetic: rounding errors, infinities, non-numeric objects (NaNs), signed zeroes, denormal numbers, different rounding modes, etc. One possibility to reason about floating-point arithmetic is to model a program computation path by means of a set of ternary constraints of the form z = x ⊙ y and use constraint propagation techniques to infer new information on the variables' possible values. In this setting, we define and prove the correctness of algorithms to precisely bound the value of one of the variables x, y or z, starting from the bounds known for the other two. We do this for each of the arithmetic operations and for each rounding mode defined by the IEEE 754 binary floating-point standard, even in the case the rounding mode in effect is only partially known. This is the first time that such so-called filtering algorithms are defined and their correctness is formally proved. This is an important step toward the formal verification of programs that use floating-point arithmetic.
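To make the idea of constraint projection concrete, the following is a minimal sketch (not the paper's algorithms, which are proved optimal for every IEEE 754 operation and rounding mode) of the two directions of propagation for the single constraint z = fl(x + y) in binary64 round-to-nearest, which is what Python's `float` addition computes. The function names are illustrative, overflow and NaNs are ignored, and the inverse projection is deliberately widened to stay sound rather than precise:

```python
import math

def project_z_add(xl, xh, yl, yh):
    """Forward projection for z = fl(x + y): bound z from bounds on x and y.

    IEEE 754 rounding is monotone, so over the box [xl, xh] x [yl, yh]
    the rounded sum is bounded by the rounded sums of the endpoints.
    Python's float addition is already correctly rounded binary64, so
    the endpoint sums below are exactly the bounds we need.
    """
    return xl + yl, xh + yh

def project_x_add(zl, zh, yl, yh):
    """Inverse projection for z = fl(x + y): bound x from bounds on z and y.

    If fl(s) lies in [zl, zh], the real sum s = x + y lies strictly
    between the floats adjacent to zl and zh.  Each subtraction below is
    then widened by one ulp with nextafter, so that the rounding of the
    intermediate result cannot make the interval unsound.  This gives a
    correct but non-optimal enclosure.
    """
    lo = math.nextafter(math.nextafter(zl, -math.inf) - yh, -math.inf)
    hi = math.nextafter(math.nextafter(zh, math.inf) - yl, math.inf)
    return lo, hi
```

For example, with x ∈ [1.0, 2.0] and y = 0.5 the forward projection yields z ∈ [1.5, 2.5], and feeding those z bounds back through the inverse projection produces an x interval that (slightly loosely) contains [1.0, 2.0].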

Highlights

  • Programs using floating-point numbers are notoriously difficult to reason about [33]

  • Many factors complicate the task: 1. compilers may transform the code in a way that does not preserve the semantics of floating-point computations; 2. floating-point formats are an implementation-defined aspect of most programming languages; 3. there are different, incompatible implementations of the operations for the same floating-point format; 4. mathematical libraries often come with little or no guarantee about what is computed; 5. programmers have a hard time predicting and avoiding phenomena caused by the limited range and precision of floating-point numbers; 6. the devices that modern floating-point formats possess in order to support better handling of such phenomena add further complexity

  • With the increasing use of floating-point computations in mission- and safety-critical settings, the issue of reliably verifying their correctness has risen to a point in which testing or other informal techniques are not acceptable any more


Summary

Introduction

Programs using floating-point numbers are notoriously difficult to reason about [33]: compilers may transform the code in a way that does not preserve the semantics of floating-point computations; floating-point formats are an implementation-defined aspect of most programming languages; and there are different, incompatible implementations of the operations for the same floating-point format. As a result of these difficulties, the verification of floating-point programs in industry relies, almost exclusively, on informal methods, mainly testing, or on the evaluation of the numerical accuracy of computations, which only allows one to determine conservative (but often too loose) bounds on the propagated error [19]. Some compilers provide options to refrain from rearranging floating-point computations. When these are not available or cannot be used, the only possibility is to verify the generated machine code or some intermediate code whose semantics is guaranteed to be preserved by the compiler back-end
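A small illustration (not taken from the paper) of why compiler rearrangement can change results: binary64 addition is not associative, so reassociating a sum can produce a different value. In Python, whose `float` is binary64 with round-to-nearest:

```python
# Near 1e16 the spacing between consecutive binary64 floats is 2, so
# adding 1.0 falls exactly halfway and ties-to-even rounds it away,
# while adding 2.0 is exact.  Reassociating the sum changes the result.
left = (1e16 + 1.0) + 1.0   # each + 1.0 rounds back to 1e16
right = 1e16 + (1.0 + 1.0)  # 1.0 + 1.0 = 2.0 is then added exactly
assert left != right
```

This is why a verifier must target either the exact evaluation order the compiler emits or code whose order the compiler guarantees to preserve.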

