Abstract

As a powerful machine learning method, deep learning has attracted the attention of numerous researchers. In the pursuit of higher-performance neural network models, the floating-point operations (FLOPs) of these models have also been increasing. In recent years, many researchers have recognized that efficiency is an important indicator for evaluating neural network models, since efficient models are easier to deploy on mobile and embedded devices. Consequently, the design of efficient neural network models has become an active research topic. In this paper, we review recent methods for the structural design of efficient convolutional neural networks. Based on their characteristics, we divide these methods into three categories: model pruning, efficient architecture design, and neural architecture search. We analyze each category in detail to demonstrate its advantages and disadvantages, compare the categories comprehensively, and offer a number of suggestions for designing efficient convolutional neural network structures. Guided by these suggestions, we build a new efficient neural network model, SharedNet, which achieves the best accuracy among manually designed efficient CNN models on the ImageNet dataset.
