Abstract

Graphics Processing Units (GPUs) offer the possibility of executing operations at reduced and mixed precisions, such as INT8, FP16, bfloat16, FP32, and FP64. For Deep Neural Networks (DNNs), reduced precision is likely to lower execution time and power consumption, since it requires a smaller hardware area and fewer clock cycles per instruction than the standard FP32 and FP64 precisions. Because less circuit area is needed at reduced precision, the circuit error rate is also expected to be lower [1]. NVIDIA GPUs additionally include tensor cores, which perform matrix multiplication in hardware: a tensor core can execute a 4×4 FP16 matrix multiplication in one clock cycle [2]. Tensor cores can deliver up to 9× higher performance than the software implementation of matrix multiplication (a sequence of additions and multiplications) on GPUs, and up to 47× higher than a CPU-based system [2].
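
As an illustration (not part of the original abstract), the sketch below shows how this mixed-precision tensor-core path is typically driven from CUDA through the warp-level WMMA API (`nvcuda::wmma` in `<mma.h>`). Although the hardware primitive is the 4×4 FP16 multiply-accumulate described above, CUDA exposes it to software as a 16×16×16 tile operation with FP16 inputs and an FP32 accumulator; the kernel name and tile setup here are illustrative, under the assumption of a compute capability 7.0+ GPU.

```cuda
#include <mma.h>
#include <cuda_fp16.h>

using namespace nvcuda;

// One warp computes C = A * B for a single 16x16 tile,
// with FP16 inputs and FP32 accumulation (mixed precision).
// Launch with exactly one warp, e.g. wmma_16x16x16<<<1, 32>>>(dA, dB, dC);
// compile with -arch=sm_70 or newer.
__global__ void wmma_16x16x16(const half *A, const half *B, float *C) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);      // zero the FP32 accumulator
    wmma::load_matrix_sync(a_frag, A, 16);  // leading dimension = 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // tensor-core MMA
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}
```

The FP32 accumulator in the fragment declaration is what makes this a mixed-precision operation: the products are formed from FP16 operands, but sums are accumulated at full FP32 precision to limit rounding error.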
