Abstract

Artificial intelligence (AI) has recently regained significant attention and investment due to the availability of massive amounts of data and the rapid growth in computing power. Many problems in academia and industry have been solved using machine learning (ML) methodologies. While the proliferation of big data applications continues to drive machine learning development, it also poses significant challenges to traditional computer systems in terms of data scalability and processing speed. Multicore processors and accelerators have paved the way for more machine learning approaches to be explored and applied to a wide range of applications. These advances, combined with the slowing of other trends such as Moore's Law, have produced a flood of processors and accelerators promising ever more computing and machine learning power. These processors and accelerators come in a variety of shapes and sizes, ranging from CPUs and Graphics Processing Units (GPUs) to Vision Processing Units (VPUs), Tensor Processing Units (TPUs), Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), and dataflow accelerators. In this paper we explore these machine learning accelerators, including their performance and power consumption figures. This comprehensive study presents a critical analysis of the aforementioned machine learning accelerators to determine which specialized accelerator provides the highest overall throughput and efficiency across a variety of tasks, while keeping an eye on each accelerator's power consumption.
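The comparison metrics named in the abstract (throughput, efficiency, power) reduce to two simple ratios: inferences per second and inferences per joule. The sketch below illustrates how such figures might be computed for any accelerator; the `run_inference` callable and the externally measured average power draw are illustrative assumptions, not the paper's actual benchmarking harness.

```python
import time

def benchmark(run_inference, num_batches, batch_size, avg_power_watts):
    """Illustrative throughput/efficiency calculation (a sketch, not the paper's method).

    run_inference   : hypothetical callable executing one batch on the accelerator
    num_batches     : number of batches to time
    batch_size      : samples per batch
    avg_power_watts : average board power during the run (assumed to be measured
                      externally, e.g. with a power meter or vendor tooling)
    """
    start = time.perf_counter()
    for _ in range(num_batches):
        run_inference()
    elapsed = time.perf_counter() - start

    samples = num_batches * batch_size
    throughput = samples / elapsed             # inferences per second
    efficiency = throughput / avg_power_watts  # inferences per joule (W = J/s)
    return throughput, efficiency
```

Under these assumptions, an accelerator with lower raw throughput can still rank higher on efficiency if its power draw is proportionally smaller, which is the trade-off the study examines.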
