Abstract

Matrix multiplication (MxM) is a cornerstone of both high-performance computing and safety-critical applications. In fact, most of the operations in convolutional neural networks for object detection are MxM-related. Chip designers are proposing novel solutions to improve the efficiency of MxM execution. In this article, we investigate the impact of two novel MxM architectures (i.e., tensor cores and mixed precision) on the reliability of graphics processing units (GPUs). In addition, we evaluate how effective the embedded error-correcting code is at reducing the MxM error rate. Our results show that low-precision operations are more reliable and that tensor cores increase the amount of data the GPU produces correctly. However, reduced precision and the use of tensor cores significantly increase the impact of faults on output correctness.
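The tension the abstract describes (fewer bits exposed to faults, but each corrupted bit matters more) can be illustrated with a small, stdlib-only sketch. It is not from the paper's fault-injection methodology; it simply flips every bit of the value 1.0 stored at FP16 and FP32 precision (via `struct` formats `e` and `f`) and counts what fraction of single-bit upsets cause more than 10% relative error. The value 1.0 and the 10% threshold are illustrative choices, not figures from the article.

```python
import struct

def flip_bit(value, fmt, nbits, bit):
    # Pack the value at the given precision, flip one bit, unpack.
    int_fmt = "H" if nbits == 16 else "I"
    (packed,) = struct.unpack("<" + int_fmt, struct.pack("<" + fmt, value))
    corrupted = packed ^ (1 << bit)
    (out,) = struct.unpack("<" + fmt, struct.pack("<" + int_fmt, corrupted))
    return out

def critical_fraction(fmt, nbits, value=1.0, threshold=0.1):
    # Fraction of single-bit flips whose relative error exceeds `threshold`.
    critical = 0
    for bit in range(nbits):
        out = flip_bit(value, fmt, nbits, bit)
        if out == out and abs(out) != float("inf"):
            err = abs(out - value) / abs(value)
        else:
            err = float("inf")  # NaN or infinity: treat as a critical outcome
        if err > threshold:
            critical += 1
    return critical / nbits

fp16 = critical_fraction("e", 16)  # half precision
fp32 = critical_fraction("f", 32)  # single precision
print(f"FP16 critical-bit fraction: {fp16:.3f}")  # 0.562 (9 of 16 bits)
print(f"FP32 critical-bit fraction: {fp32:.3f}")  # 0.375 (12 of 32 bits)
```

A narrower word offers fewer bits for a particle strike to corrupt, consistent with the lower error rate reported for low-precision operations; but, as the higher FP16 fraction above suggests, a larger share of those bits sit in the sign, exponent, or high mantissa, where a flip badly distorts the result.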
