Abstract

Convolutional neural network models for computer vision continue to emerge in rapid succession. From LeNet in 1998, to AlexNet, which ignited the deep learning boom in 2012, to VGG<sup>[1]</sup> in 2014 and ResNet<sup>[2]</sup> in 2015, deep learning network models have performed increasingly well in image processing. At the same time, neural networks have grown larger, their structures have become more complex, and the hardware resources required for training and inference have steadily increased, so deep learning models can often only be run on servers with high computing power. Deep convolutional neural networks have raised the performance of many computer vision tasks to a new level. The overall trend has been to build deeper and more complex networks in pursuit of higher accuracy, but such networks do not necessarily meet the size and speed requirements of mobile devices. Because of limited hardware resources and computing power, mobile devices struggle to run complex deep learning network models. The deep learning community has therefore also worked to miniaturize neural networks, making models smaller and faster while preserving accuracy. Since 2016, lightweight network models such as SqueezeNet<sup>[3]</sup>, ShuffleNet V1-V2<sup>[4-5]</sup>, MobileNet V1-V3<sup>[6-8]</sup>, and Inception V1-V4<sup>[9-12]</sup> have been proposed, making it feasible for mobile terminals and embedded devices to run neural network models. This article therefore compares the mainstream lightweight neural network models, analyzes and contrasts the various networks, and looks ahead to future development trends and several promising research directions for lightweight neural networks.
