Abstract

The introduction of the new methods consists of three parts. To obtain a computed result whose correctness is verified, a precisely defined computer arithmetic is indispensable. Moreover, a computer arithmetic delivering a result of maximum accuracy for every single operation is desirable. The definition of maximum accuracy for a computed result is straightforward; its theoretical and practical foundation is a new theory developed by U. Kulisch and W. Miranker (cf. [2]), described in the following. A computed result is an approximation of the precisely defined real result by a floating-point number. Although the real (infinitely precise) result is in general not computable, it serves to define the term maximum accuracy. As long as no overflow or underflow occurs, there are two cases. First, the precise result may itself be a machine (floating-point) number. Second, the precise real result may lie strictly between two neighbouring floating-point numbers. In the first case the maximally accurate floating-point result is obviously the exact result, which happens to be a machine number. In the second case both floating-point neighbours of the precise result are of maximum accuracy, and the rounding mode determines which of them is delivered. There are four essential rounding modes: rounding to nearest, rounding downwards, rounding upwards, and rounding towards zero. For rounding to the nearest floating-point number there is the special case that the precise result is exactly the midpoint of the two floating-point neighbours; in this case the result is the floating-point neighbour of larger magnitude. The definition of maximally accurate floating-point operations is thus clear, but the theoretical basis and the implementation are not trivial. One reason is that the exact, infinitely precise result is in general not known.
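The two-neighbour situation and the four rounding modes can be sketched with Python's `decimal` module, used here merely as a stand-in for a precisely defined computer arithmetic (the 4-digit precision and the `divide` helper are illustrative choices, not part of the theory described above):

```python
from decimal import (Decimal, localcontext,
                     ROUND_HALF_UP, ROUND_FLOOR, ROUND_CEILING, ROUND_DOWN)

def divide(a, b, mode, digits=4):
    """Divide a by b in a 4-digit decimal arithmetic under the given
    rounding mode. The exact result 1/3 = 0.333... is not representable,
    so it lies between the two 4-digit neighbours 0.3333 and 0.3334;
    both are of maximum accuracy, and the mode selects one of them."""
    with localcontext() as ctx:
        ctx.prec = digits
        ctx.rounding = mode
        return Decimal(a) / Decimal(b)

for name, mode in [("to nearest", ROUND_HALF_UP),   # ties away from zero
                   ("downward",   ROUND_FLOOR),
                   ("upward",     ROUND_CEILING),
                   ("toward zero", ROUND_DOWN)]:
    print(f"{name:12} {divide(1, 3, mode)}")

# The midpoint tie rule ("neighbour of larger magnitude wins") shows up
# when the exact result is exactly halfway between two neighbours:
print(Decimal("0.33335").quantize(Decimal("0.0001"), rounding=ROUND_HALF_UP))
```

Here rounding downward delivers 0.3333 and rounding upward delivers 0.3334, the two neighbours of the exact result, and the midpoint 0.33335 is rounded to 0.3334, the neighbour of larger magnitude.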
