Abstract
Recently, tensor decomposition approaches have been used to compress deep convolutional neural networks (CNNs), yielding faster CNNs with fewer parameters. However, existing tensor decomposition based compression approaches suffer from two problems: first, they usually decompose a CNN layer by layer, ignoring the correlations between layers; second, they separate training from compression, which easily leads to locally optimal ranks. In this paper, Learning Tucker Compression (LTC) is proposed. LTC obtains the best Tucker ranks by jointly optimizing the CNN's loss function and the Tucker decomposition's cost function, so that training and compression are carried out simultaneously. It can optimize the CNN directly, without decomposing the whole network layer by layer, and can fine-tune the whole network directly, without keeping any parameters fixed. LTC is verified on two public datasets. Experiments show that LTC makes networks such as ResNet and VGG faster with nearly the same classification accuracy, surpassing current tensor decomposition approaches.
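To make the joint objective concrete, below is a minimal PyTorch sketch of the idea the abstract describes for a single convolutional layer: a classification loss combined with a Tucker reconstruction cost, trained together. It assumes a Tucker-2 parameterization of the kernel (factor matrices over the input and output channel modes, as is common in CNN compression); all names here (tucker2_reconstruct, joint_loss, lam, the ranks r and s) are illustrative assumptions, not from the paper, and the abstract does not specify the paper's actual rank-learning mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def tucker2_reconstruct(core, u_out, u_in):
    # Tucker-2 approximation of a conv kernel:
    # W_hat[o, i, h, w] = sum_{r, s} U_out[o, r] * G[r, s, h, w] * U_in[i, s]
    return torch.einsum('or,rshw,is->oihw', u_out, core, u_in)

def joint_loss(logits, targets, weight, core, u_out, u_in, lam=1e-3):
    # Classification loss plus a weighted Tucker reconstruction cost,
    # so the kernel is pushed toward a low-rank form while training.
    ce = F.cross_entropy(logits, targets)
    rec = (weight - tucker2_reconstruct(core, u_out, u_in)).pow(2).sum()
    return ce + lam * rec

# Toy usage: one conv layer with global average pooling and a linear head.
conv = nn.Conv2d(16, 32, 3, padding=1)
head = nn.Linear(32, 10)
r, s = 8, 8  # hypothetical Tucker-2 ranks for the channel modes
core = torch.randn(r, s, 3, 3, requires_grad=True)
u_out = torch.randn(32, r, requires_grad=True)
u_in = torch.randn(16, s, requires_grad=True)
opt = torch.optim.SGD([*conv.parameters(), *head.parameters(),
                       core, u_out, u_in], lr=1e-2)

x = torch.randn(4, 16, 8, 8)
y = torch.randint(0, 10, (4,))
logits = head(conv(x).mean(dim=(2, 3)))  # global average pooling
loss = joint_loss(logits, y, conv.weight, core, u_out, u_in)
opt.zero_grad()
loss.backward()
opt.step()
```

After such joint training, the layer could be replaced by the smaller convolutions induced by u_in, core, and u_out, which is where the speedup would come from; how LTC performs this replacement and selects ranks is detailed in the paper itself.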