Abstract

Software engineers use different compilers and code optimisation levels (e.g., O2 and Os) to achieve the best results under distinct constraints (e.g., power, performance, and latency). Compilers and code optimisations have specific characteristics that directly impact an application's code footprint, performance, power efficiency, and reliability. In this scenario, this paper investigates the impact of widely adopted compilers on the soft error reliability of convolutional neural network (CNN) inference models executing on a RISC-V (rv32i) processor. Fault injection campaigns consider two fault targets (registers and memory), two open-source compilers (GCC 8.1.0 and Clang 12.0.1), five code optimisation levels, and two CNN inference models with 3 and 28 layers, resulting in 680k fault injections. Results show that optimisation flags can lead to more than a two-order-of-magnitude increase in the occurrence of critical faults. Gathered results also show that the 3-layer CNN model is more susceptible to register faults, whereas the 28-layer model presents a higher susceptibility to memory bit-flip incidents.
