Abstract

Approximate computing (AC) techniques trade a minor loss in application accuracy for gains in performance, power, and energy. For this reason, AC has emerged as a viable method for efficiently supporting several compute-intensive applications, such as machine learning, deep learning, and image processing, that can tolerate bounded errors in computations. However, most prior techniques do not consider the possibility of soft errors or malicious bit-flips in AC systems. These errors may interact with approximation-introduced errors in unforeseen ways, leading to disastrous consequences, such as the failure of computing systems. A recent research effort, FTApprox (DATE'21), proposes an error-resilient approximate data format. FTApprox stores two blocks, starting from the one containing the most significant valid (MSV) bit. It also stores the location of the MSV block and protects it using error-correcting bits (ECBs). However, FTApprox has crucial limitations, such as a lack of flexibility and the redundant storage of zeros in the MSV block. In this paper, we propose a novel storage format named Versatile Approximate Data Format (VADF) for storing approximate integer numbers while providing resilience to soft errors. VADF prescribes rules for storing, for example, a 32-bit number as an 8-bit, 12-bit, or 16-bit number. VADF identifies the MSV bit and stores a certain number of bits following the MSV bit. It also stores the location of the MSV bit and protects it with ECBs. VADF does not explicitly store the MSV bit itself, which prevents it from accruing significant errors. VADF incurs lower error than both truncation methodologies and FTApprox. We further evaluate five image-processing and machine-learning applications and confirm that VADF provides higher application quality than FTApprox both in the presence and in the absence of soft errors. Finally, VADF allows the use of narrow arithmetic units. For example, instead of using a 32-bit multiplier/adder, one can first use VADF (or FTApprox) to compress the data and then use an 8-bit multiplier/adder. Through this approach, VADF facilitates 95.97% and 79.3% energy savings in multiplication and addition, respectively. However, the subsequent re-conversion of the 8-bit output data to 32-bit data using Inv-VADF(16,3,32) diminishes these energy savings by 9.6% for addition and by 0.56% for multiplication. The code is available at https://github.com/CandleLabAI/VADF-ApproximateDataFormat-TECS .
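
To make the storage scheme concrete, the sketch below implements a simplified VADF-style round trip in Python: the encoder records the position of the MSV bit and a fixed number of bits following it, leaving the MSV bit itself implicit, and the decoder re-inserts that implicit bit to reconstruct an approximation of the original value. The function names, the choice of a 7-bit payload with a 5-bit position field, and the omission of the ECB protection of the position field are illustrative assumptions on our part, not the paper's exact VADF(n, e, w) specification.

```python
# Minimal sketch of a VADF-style encode/decode for positive 32-bit integers.
# Function names, the 7-bit payload plus 5-bit MSV position (12 bits total),
# and the absence of error-correcting bits (ECBs) are illustrative
# assumptions, not the paper's exact format.

NUM_BITS = 7  # bits kept after the MSV bit (the MSV bit itself is implicit)

def vadf_encode(x: int) -> tuple[int, int]:
    """Return (MSV-bit position, the NUM_BITS bits following the MSV bit)."""
    assert 0 < x < (1 << 32), "sketch covers positive 32-bit values only"
    pos = x.bit_length() - 1              # index of the most significant valid bit
    shift = pos - NUM_BITS                # bits to drop (or pad) below the MSV
    frac = (x >> shift if shift >= 0 else x << -shift) & ((1 << NUM_BITS) - 1)
    return pos, frac                      # pos fits in 5 bits; frac in NUM_BITS

def vadf_decode(pos: int, frac: int) -> int:
    """Approximately reconstruct the original value from (pos, frac)."""
    value = (1 << NUM_BITS) | frac        # re-insert the implicit MSV bit
    shift = pos - NUM_BITS                # undo the normalization shift
    return value << shift if shift >= 0 else value >> -shift

# Small values round-trip exactly; large values lose only low-order bits.
x = 0x0003F5A1
print(hex(vadf_decode(*vadf_encode(x))))  # -> 0x3f400 (vs. 0x3f5a1 original)
```

Because the leading one is implicit, a k-bit payload effectively preserves k + 1 significant bits, which illustrates why not storing the MSV bit explicitly keeps the approximation error small.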
