Abstract

In this brief, we study the impact of input data distribution on temporal approximation (TA) in floating-point units (FPUs). In TA, rather than performing a computation, a previously computed result is reused as the output to introduce approximation; the temporal locality of the inputs therefore plays a central role. We show that the efficacy of TA is strongly dependent on the input data distribution. While prior works use a uniform random input data distribution for worst-case analysis of approximate FPUs, it fails to capture the worst case for TA. We show that, contrary to conventional practice, input samples drawn from a normal distribution with mean (μ) equal to zero capture the worst case for TA, irrespective of the FP operation. We evaluated TA in FP multipliers and FP dividers under four input data distributions: a) normal distribution (μ = 0), b) normal distribution (μ = 1), c) uniform distribution, and d) power-law distribution. Inputs sampled from these distributions were applied to several algorithms: dot product, principal component analysis, PageRank, vector normalization, and element-wise (dot) division. On average, the normal distribution (μ = 0) captures the worst case 8% and 12% more effectively than the widely used uniform distribution for FP multipliers and FP dividers, respectively. We also highlight that prior knowledge of the input data distribution can be exploited to reduce the power-delay product.
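
To illustrate how such an evaluation could be set up, the sketch below samples operand streams from the four distributions and computes a crude proxy for the temporal locality that TA relies on: the fraction of consecutive operations whose operands agree in their sign, exponent, and top mantissa bits. The distribution parameters (σ = 1, the uniform range, the Pareto shape), the matching criterion, and all function names are illustrative assumptions, not the TA mechanism or evaluation flow used in the brief.

import numpy as np

def sample_inputs(dist, n, rng):
    # Draw n operand values from one of the four evaluated distributions.
    # Parameters (sigma = 1, uniform range, Pareto shape) are assumptions.
    if dist == "normal_mu0":
        return rng.normal(loc=0.0, scale=1.0, size=n)
    if dist == "normal_mu1":
        return rng.normal(loc=1.0, scale=1.0, size=n)
    if dist == "uniform":
        return rng.uniform(low=-1.0, high=1.0, size=n)
    if dist == "power_law":
        # Pareto-type heavy tail as a stand-in for a power-law distribution.
        return rng.pareto(a=2.0, size=n) + 1.0
    raise ValueError(f"unknown distribution: {dist}")

def top_bits(x, mantissa_bits=4):
    # Keep the sign, exponent, and top few mantissa bits of each float32 operand.
    raw = np.asarray(x, dtype=np.float32).view(np.int32)
    return raw >> (23 - mantissa_bits)

def ta_locality(a, b, mantissa_bits=4):
    # Fraction of operations whose operands are close enough to the previous
    # operation's operands that a TA scheme could plausibly reuse the stored
    # result. This is only a proxy for temporal locality, not the TA hardware.
    ka, kb = top_bits(a, mantissa_bits), top_bits(b, mantissa_bits)
    hits = (ka[1:] == ka[:-1]) & (kb[1:] == kb[:-1])
    return hits.mean()

rng = np.random.default_rng(0)
n = 1_000_000
for dist in ("normal_mu0", "normal_mu1", "uniform", "power_law"):
    a = sample_inputs(dist, n, rng)
    b = sample_inputs(dist, n, rng)
    print(f"{dist:11s} locality proxy: {ta_locality(a, b):.5f}")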
