Abstract

Deep learning has been widely applied in image processing, natural language processing, and speech recognition. To improve the structure of deep learning models, compressing Convolutional Neural Networks has become a major research focus. This study proposes a sparsification-based model compression method for Convolutional Neural Networks built on hybrid multi-objective evolution, in which sparse pruning increases the sparsity of the model and a time-domain residual module is introduced to provide more robust generalization performance; the proposed method is then evaluated experimentally. The results indicate that the Convolutional Neural Network model generated by the proposed method drops to its lowest value within the first 10 iterations; in the sparsity test, the total weight density of the generated model is about 36%; and in the latency test, the latency of the proposed method lies between 3 ms and 14 ms at a weight sparsity of 60%. These results show that the proposed method achieves a better model compression effect and can generate Convolutional Neural Network models that balance accuracy and size, providing new technical support for the compression of convolutional neural networks.
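
The abstract refers to sparse pruning that raises weight sparsity (roughly 36% remaining weight density in the reported tests). The paper's exact pruning procedure is not given here, so the following is only a minimal sketch of global magnitude-based unstructured pruning to a target sparsity in PyTorch; the toy model, the 60% sparsity target, and the helper name `prune_to_sparsity` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def prune_to_sparsity(model: nn.Module, sparsity: float = 0.6) -> float:
    """Zero out the smallest-magnitude conv/linear weights until the requested
    fraction of weights is zero. Illustrative sketch only; the paper's actual
    pruning criterion (and its evolutionary search) may differ."""
    # Collect all prunable weights (convolutional and fully connected layers).
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Conv2d, nn.Linear))]
    all_mags = torch.cat([w.detach().abs().flatten() for w in weights])
    # Global magnitude threshold: weights at or below it are set to zero.
    k = int(sparsity * all_mags.numel())
    threshold = all_mags.kthvalue(k).values if k > 0 else all_mags.min() - 1
    with torch.no_grad():
        for w in weights:
            w.mul_((w.abs() > threshold).float())
    # Report the achieved weight density (fraction of nonzero weights).
    nonzero = sum(int(w.count_nonzero()) for w in weights)
    total = sum(w.numel() for w in weights)
    return nonzero / total

# Example: a small CNN pruned to roughly 60% weight sparsity (~40% density).
model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(),
                      nn.Conv2d(16, 32, 3), nn.ReLU(),
                      nn.Flatten(), nn.Linear(32 * 28 * 28, 10))
density = prune_to_sparsity(model, sparsity=0.6)
print(f"remaining weight density: {density:.2f}")
```

In practice such a pruning step would be interleaved with fine-tuning, and in a multi-objective evolutionary setting the per-layer sparsity levels would be candidate solutions scored jointly on accuracy and model size rather than fixed by a single global threshold as above.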
