Abstract

Artificial intelligence (AI) has brought great convenience to human society by improving efficiency, increasing productivity, and reducing cost. As a branch of deep learning, convolutional neural networks (CNNs) have attracted wide attention from researchers in recent years. In this paper, the components of the CNN structure, namely the input layer, convolutional layer, pooling layer, fully connected layer, activation function, and output layer, are elaborated. In addition, four kinds of hardware that can be used to implement AI, namely the graphics processing unit (GPU), the field-programmable gate array (FPGA), the application-specific integrated circuit (ASIC), and brain-like chips, are discussed. Based on the different characteristics of these four kinds of hardware, this paper analyzes their feasibility for implementing AI algorithms. After comparing their cost, flexibility, and power consumption, it is concluded that each kind of hardware has its own advantages for implementing AI under different circumstances. The GPU performs better at handling parallel operations and constructing complex AI network models. The FPGA enables flexible programming of AI models. The ASIC, on the other hand, is preferred for its low power consumption and cost. Although brain-like chips are not as mature as the other three, they are promising for implementing AI in the future.
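To make the enumerated CNN components concrete, the following is a minimal illustrative sketch of such a network (input, convolutional layer, activation function, pooling layer, fully connected layer, and output). The use of PyTorch and the specific layer sizes are assumptions for illustration only and are not taken from the paper.

import torch
import torch.nn as nn

class SimpleCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.conv = nn.Conv2d(in_channels=1, out_channels=8,
                              kernel_size=3, padding=1)   # convolutional layer
        self.act = nn.ReLU()                               # activation function
        self.pool = nn.MaxPool2d(kernel_size=2)            # pooling layer
        self.fc = nn.Linear(8 * 14 * 14, num_classes)      # fully connected layer -> output

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.act(self.conv(x)))   # feature extraction stage
        x = torch.flatten(x, start_dim=1)        # flatten feature maps for the classifier
        return self.fc(x)                        # class scores at the output layer

# Example usage: a batch of four 28x28 grayscale images as the input layer
scores = SimpleCNN()(torch.randn(4, 1, 28, 28))
print(scores.shape)  # torch.Size([4, 10])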
