Abstract

In recent years, interest in approximate computing has grown significantly across many disciplines as a way to save energy and computation cost by trading off the quality of numerical simulation. Hardware acceleration based on low-precision floating-point arithmetic is anticipated in the upcoming generation of microprocessors and code compilers and has already proven beneficial for weather and climate modelling and for neural network training. The present work illustrates the application of low-precision arithmetic to nuclear reactor core uncertainty analysis. We studied the performance of an elementary transient reactor core model with arbitrary precision of the floating-point multiplication in a direct linear system solver. Using this model, we calculated reactor core transients initiated by a control rod ejection, taking into account the uncertainty of the model input parameters. We then evaluated the round-off errors of the model outputs at different precision levels. Comparing the round-off errors with the model uncertainty showed that the model can be run with a 15-bit floating-point precision with acceptable degradation of the results' accuracy. This precision corresponds to a gain of about 6× in the bit complexity of the linear system solution algorithm, which can be realized as reduced energy costs on low-precision hardware.
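
The sketch below is not the authors' code; it is a minimal illustration of the general idea of emulating reduced-precision floating-point multiplication inside a direct (Gaussian elimination) solver and comparing the result against a full-precision solve. The helper names, the test matrix, and the interpretation of the 15-bit figure as significand bits are assumptions made for illustration only.

```python
import math

def round_significand(x, p):
    """Round x to p significand bits (emulates a reduced-precision format)."""
    if x == 0.0 or not math.isfinite(x):
        return x
    m, e = math.frexp(x)          # x = m * 2**e with 0.5 <= |m| < 1
    scale = 2.0 ** p
    return math.ldexp(round(m * scale) / scale, e)

def lp_mul(a, b, p):
    """Multiplication whose result is stored with only p significand bits."""
    return round_significand(a * b, p)

def solve_gauss(A, b, p):
    """Gaussian elimination (no pivoting) with reduced-precision multiplications."""
    n = len(b)
    A = [row[:] for row in A]
    b = b[:]
    for k in range(n):
        for i in range(k + 1, n):
            f = round_significand(A[i][k] / A[k][k], p)
            for j in range(k, n):
                A[i][j] -= lp_mul(f, A[k][j], p)
            b[i] -= lp_mul(f, b[k], p)
    x = [0.0] * n
    for i in reversed(range(n)):
        s = b[i] - sum(lp_mul(A[i][j], x[j], p) for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

# Example: compare a 15-bit solve against a 53-bit (IEEE double-like) solve.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x_lp = solve_gauss(A, b, p=15)
x_fp = solve_gauss(A, b, p=53)
print("round-off error per component:",
      [abs(u - v) for u, v in zip(x_lp, x_fp)])
```

In the same spirit as the study, such per-component round-off errors would then be compared with the spread of the model outputs due to input-parameter uncertainty to decide whether the lower precision is acceptable.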
