Abstract

As a deep neural network (DNN) compression method, the learning-compression (LC) algorithm based on pre-trained models and matrix decomposition increases training time and ignores the structural information of the model. In this manuscript, a tensor decomposition-based direct LC (TDLC) algorithm that requires no pre-trained model is proposed. TDLC eliminates the pre-trained model and, for the first time, applies tensor decomposition within the LC framework to preserve the structural features of the model. TDLC involves two key steps. First, an optimal rank selection method is proposed in the compression step (C-step) to find globally optimal ranks for the tensor decomposition. Second, in the learning step (L-step), TDLC uses a cyclical learning rate, rather than a traditional monotonic learning-rate schedule, to improve the generalization performance of the uncompressed model. TDLC obtains the optimal compressed model by alternately optimizing the L-step and the C-step. In the experiments, TDLC is compared with 16 state-of-the-art compression methods. Extensive experimental results show that TDLC produces high-accuracy compressed models with high compression rates. Compared with TDLC-pre-trained, TDLC notably shortens training time by 30% and reduces parameters by 11% on ResNet32, while improving accuracy by 0.2%.
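The abstract packs several moving parts (alternating L-step/C-step optimization, tensor decomposition in the C-step, a cyclical learning rate in the L-step) into one paragraph, so a minimal sketch may help fix the ideas. The code below is an illustrative reconstruction, not the authors' implementation: the function names (`cyclical_lr`, `tucker_c_step`), the triangular learning-rate shape, the toy ranks, and the use of truncated HOSVD as the tensor decomposition are all assumptions made for the example; the paper's actual rank selection method would supply the ranks.

```python
import numpy as np

def cyclical_lr(step, base_lr=1e-3, max_lr=1e-1, cycle_len=2000):
    """Triangular cyclical learning rate; a hypothetical stand-in for
    the non-monotonic schedule TDLC uses in the L-step."""
    pos = (step % cycle_len) / cycle_len      # position within the cycle, in [0, 1)
    tri = 1.0 - abs(2.0 * pos - 1.0)          # rises to 1 at mid-cycle, then falls
    return base_lr + (max_lr - base_lr) * tri

def tucker_c_step(weight, ranks):
    """C-step sketch: truncated HOSVD (one flavour of Tucker
    decomposition) of a 4-D conv weight tensor. `ranks` is assumed to
    come from the optimal rank selection described in the paper."""
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-n unfolding: move mode to the front, flatten the rest.
        unfolded = np.moveaxis(weight, mode, 0).reshape(weight.shape[mode], -1)
        u, _, _ = np.linalg.svd(unfolded, full_matrices=False)
        factors.append(u[:, :r])              # keep leading left singular vectors
    # Project the tensor onto each factor to obtain the core tensor.
    core = weight
    for mode, u in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode
        )
    return core, factors

# Alternating-optimization skeleton: a real L-step would run SGD epochs
# with the cyclical schedule; the C-step re-projects the weights onto
# the low-rank set defined by the selected ranks.
weight = np.random.randn(64, 32, 3, 3)        # toy conv kernel
for step in range(3):
    lr = cyclical_lr(step)                    # L-step would train with this lr
    core, factors = tucker_c_step(weight, ranks=(16, 8, 3, 3))

# Compression effect: parameter count of the factored form vs. the original.
n_orig = weight.size
n_comp = core.size + sum(f.size for f in factors)
print(f"params: {n_orig} -> {n_comp}")
```

Under these toy ranks the factored form stores the small core plus one factor matrix per mode, which is where the parameter reduction reported in the abstract comes from; the alternation between training and re-projection is the generic LC pattern the paper builds on.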
