Abstract

Deep learning has driven significant breakthroughs in numerous Artificial Intelligence (AI) applications. It is widely used in many state-of-the-art tasks such as image classification, speech recognition, object detection, and market prediction. However, deep learning models require extensive calculation and high computational power to perform these tasks. There is therefore a need to explore efficient hardware for Deep Neural Networks (DNNs) that does not compromise accuracy or output quality. Hardware cost also plays an essential role in determining whether DNNs can be deployed in portable or standalone devices. In this paper, we aim to provide a detailed survey of recent trends in the efficient hardware processing of DNNs. We present an overview of deep learning approaches and various hardware architectures, and analyze key advances in the efficient computation of DNNs achieved by reducing hardware size and by adopting hardware-driven algorithms. Furthermore, we explain the key comparisons and tradeoffs between different hardware platforms and various deep learning techniques for the efficient utilization of resources.
