Abstract
The expressive power of deep neural networks has enabled us to successfully tackle several modeling problems in computer vision, natural language processing, and financial forecasting in recent years. Nowadays, neural networks achieving state-of-the-art (SoTA) performance in a given field can consist of hundreds of layers with millions of parameters. While these networks achieve impressive performance, optimizing a single SoTA neural network often requires several days on high-end hardware. More importantly, it took the community several years of experimentation to gradually discover more efficient neural network architectures, progressing from VGGNet to ResNet and then DenseNet. In addition to this expensive and time-consuming experimentation process, SoTA neural networks, which require powerful processors to run, cannot be easily deployed to mobile or embedded devices. For these reasons, improving the training and deployment efficiency of deep neural networks has become an important area of research in the deep learning community. In this chapter, we cover two topics, namely progressive neural network learning and compressive learning, which have been extensively developed in recent years to enhance the training and deployment of deep models.