Abstract

As scientific computation continues to scale, it is crucial to use floating-point arithmetic processors as efficiently as possible. Lower precision allows streaming architectures to perform more operations per second and can reduce memory bandwidth pressure on all architectures. However, using a precision that is too low for a given algorithm and data set will result in inaccurate results. Thus, developers must balance speed and accuracy when choosing the floating-point precision of their subroutines and data structures. We are building tools to help developers learn about the runtime floating-point behavior of their programs, and to help them make implementation decisions regarding this behavior. We propose a tool that performs automatic binary instrumentation of floating-point code to detect mathematical cancellations. In particular, we show how our prototype can detect the variation in cancellation patterns for different pivoting strategies in Gaussian elimination, as well as how our prototype can detect a program’s sensitivity to ill-conditioned input sets.
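To make the idea of detecting cancellation concrete, below is a minimal sketch (not the paper's instrumentation tool) of the exponent-based criterion commonly used to flag cancellation: when two operands of similar magnitude and opposite sign are added, the result's exponent drops well below the operands' exponents, and the size of that drop approximates the number of significant bits lost. The helper name `canceled_bits` is an illustrative assumption.

```cpp
#include <cmath>
#include <cstdio>

// Illustrative helper (hypothetical name): estimate the number of bits
// canceled when computing x + y. Cancellation shows up as the result's
// exponent falling below the larger operand exponent.
int canceled_bits(double x, double y) {
    double z = x + y;
    if (x == 0.0 || y == 0.0 || z == 0.0) return 0;  // trivial cases

    int ex, ey, ez;
    std::frexp(x, &ex);   // extract binary exponents
    std::frexp(y, &ey);
    std::frexp(z, &ez);

    int emax = (ex > ey) ? ex : ey;                  // larger operand exponent
    return (emax > ez) ? (emax - ez) : 0;            // exponent drop ~ bits lost
}

int main() {
    // Subtracting nearly equal values cancels most of the significant bits.
    std::printf("canceled bits: %d\n", canceled_bits(1.000000001, -1.0)); // ~30
    // Adding values of the same sign causes no cancellation.
    std::printf("canceled bits: %d\n", canceled_bits(3.0, 4.0));          // 0
    return 0;
}
```

An instrumentation tool of the kind the abstract describes would apply a check like this at each floating-point addition or subtraction observed at runtime, rather than in hand-written source as shown here.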
