Abstract

Deep convolutional neural networks have achieved remarkable performance on object detection tasks. Regression-based models such as YOLO and SSD are fast and accurate, but they still run slowly on devices with limited computational and memory resources. Much work has focused on network pruning so that such models can be deployed efficiently. However, most of these methods require expert knowledge and extensive experiments to determine which layers to prune and how many channels to prune in each layer. In this paper, we first define the importance score of a channel as the product of the L1-norm of its weights and the scaling factor of its batch normalization layer. We then adopt Lasso regression with a fine-grained regularization coefficient to learn sparse scaling factors. By pruning channels with small importance scores and then retraining, we obtain a compact model without a significant drop in accuracy. Compared with other rule-based pruning methods, our method achieves higher accuracy on the Pascal VOC2007 dataset while requiring less human effort for channel selection.
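
To make the importance score and the sparsity objective concrete, here is a minimal PyTorch sketch of how both could be computed; the layer shapes, the keep ratio, and the single coefficient `lam` are illustrative assumptions (the abstract describes a fine-grained regularization coefficient, not a single global one):

```python
import torch
import torch.nn as nn

def channel_importance(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> torch.Tensor:
    """Importance of each output channel: the L1-norm of that channel's
    convolution weights times the BN scaling factor (gamma)."""
    l1 = conv.weight.detach().abs().sum(dim=(1, 2, 3))  # shape: (out_channels,)
    gamma = bn.weight.detach().abs()                    # shape: (out_channels,)
    return l1 * gamma

def sparsity_penalty(model: nn.Module, lam: float = 1e-4) -> torch.Tensor:
    """Lasso-style L1 penalty on all BN scaling factors, added to the task
    loss during training to push gammas toward zero. A single coefficient
    `lam` is used here for simplicity."""
    gammas = [m.weight.abs().sum() for m in model.modules()
              if isinstance(m, nn.BatchNorm2d)]
    return lam * torch.stack(gammas).sum()

# Usage on a single conv + BN block (shapes and keep ratio are illustrative)
conv = nn.Conv2d(16, 32, kernel_size=3, padding=1)
bn = nn.BatchNorm2d(32)
scores = channel_importance(conv, bn)
keep = torch.argsort(scores, descending=True)[: scores.numel() // 2]
print(f"keeping {keep.numel()} of {scores.numel()} channels")

penalty = sparsity_penalty(nn.Sequential(conv, bn))
print(f"sparsity penalty: {penalty.item():.4f}")
```

In this sketch the penalty would be added to the detection loss at each training step, and channels with small importance scores are removed before the retraining (fine-tuning) step described above.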
