Abstract

Deep neural networks (DNNs) have achieved remarkable accuracy in tasks such as image processing, but this success comes at the cost of heavy computation and parameter storage. To reduce these overheads, a wide range of regularization terms has been proposed for network compression, each with its own scope of application. Structural sparsity learning, in which group sparse regularizations play a central role, is of particular interest because it further reduces computation. However, the group sparse regularizations actually used in network compression are relatively few: the majority of structural compression methods stem from the ℓ2,1 regularization alone, and sparse regularization for structural network compression lacks a unified form and theoretical guidance. We therefore focus on a generalized sparse regularization for sparse learning in deep networks. We place a series of sparse regularizations into a single framework that covers group sparse regularizations with different properties, so that switching between regularizations amounts to selecting hyper-parameters. This allows a unified optimization strategy with theoretical guidance, and lets different sparse regularizations be adapted to different tasks. To our knowledge, this is the first work to apply a generalized sparse regularization with this novel form of group sparsity to the compression of DNNs. The proposed ℓp,q,r regularization achieves both neuron-level and connection-level sparsity, and we give analytical solutions for some specific choices of (p, q, r), so compression can be carried out within the standard optimization process. Extensive experiments illustrate the advantages and characteristics of the new method for further applications.
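The abstract does not state the exact form of the ℓp,q,r penalty, so the sketch below is only an assumed generic mixed-norm group penalty, Ω(W) = (Σ_g ||w_g||_p^q)^r with each output neuron's fan-in weights as a group; the grouping, the parameter names, and the regularization weight are illustrative assumptions, not the paper's definition.

```python
import numpy as np

def lpqr_penalty(W, p=2.0, q=1.0, r=1.0):
    """Assumed generic mixed-norm group penalty over the rows of a weight matrix W.

    Each row of W (the fan-in weights of one output neuron) is treated as a group:
        Omega(W) = ( sum_g ||w_g||_p ** q ) ** r
    With p=2, q=1, r=1 this reduces to the familiar l2,1 group penalty.
    """
    group_norms = np.sum(np.abs(W) ** p, axis=1) ** (1.0 / p)  # ||w_g||_p per row
    return np.sum(group_norms ** q) ** r

# Illustrative usage: add lam * lpqr_penalty(W) to the task loss; rows whose norm
# is driven to zero correspond to prunable neurons (neuron-level sparsity),
# while individual zeroed weights give connection-level sparsity.
W = np.random.randn(64, 128)   # weights of a hypothetical fully connected layer
reg_term = 1e-4 * lpqr_penalty(W, p=2.0, q=1.0, r=1.0)
```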
