Abstract

This survey presents a comprehensive analysis of the potential benefits and challenges of training deep neural networks (DNNs) on CPUs, summarizing existing research in the field. Five classes of DNN models are examined: Ternary Neural Networks (TNNs), Binary Neural Networks (BNNs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and the Sub-Linear Deep Learning Engine (SLIDE), a method designed specifically for CPU-based network training. The survey highlights the advantages of using CPUs for DNN training, such as low cost, compact size, and broad applicability across various domains. It also examines the obstacles to CPU acceleration, including the absence of a unified programming model and the inefficiency of DNN training caused by the large number of floating-point operations involved. To address these issues, the survey explores algorithmic and hardware optimization strategies, including compressed network structures, techniques such as SLIDE, and the RISC-V instruction set. The survey concludes that CPUs are likely to become a practical alternative for developers with limited resources. Through continued algorithm optimization and hardware enhancements, CPUs can provide cost-efficient neural network training solutions, excelling in areas such as mobile servers and edge computing.
