Abstract
Deep neural networks have become increasingly popular owing to their ability to solve highly complex pattern recognition problems. However, they often demand massive computational and memory resources, which is the main obstacle to running them efficiently and in full on embedded platforms. This work addresses the problem by reducing the computational and memory requirements of deep neural networks: it proposes a variance-reduced (VR) optimization method combined with regularization techniques that compresses the memory footprint of models while keeping training fast. It is shown, both theoretically and experimentally, that sparsity-inducing regularization works effectively with VR-based optimization, in which the behavior of the stochastic element of the optimizer is controlled by a hyper-parameter, allowing non-convex problems to be solved.
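The abstract does not spell out the optimizer itself, so the following is only a minimal sketch of one standard way to pair variance-reduced updates with a sparsity-inducing L1 penalty (a prox-SVRG-style loop in NumPy), not the paper's actual method. The function names and hyper-parameters (lr, lam, epochs, inner) are illustrative assumptions.

```python
import numpy as np

def soft_threshold(w, tau):
    """Proximal operator of tau * ||w||_1 (soft-thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def prox_svrg(grad_i, n, w0, lr=0.01, lam=0.1, epochs=10, inner=None):
    """Sketch of a variance-reduced step combined with an L1 prox step.

    grad_i(w, i) returns the gradient of the i-th sample's loss at w;
    lam is the L1 regularization strength driving weights to zero.
    """
    inner = inner or n
    w_snap = w0.copy()
    for _ in range(epochs):
        # Full gradient at the snapshot: the variance-reduction anchor.
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        w = w_snap.copy()
        for _ in range(inner):
            i = np.random.randint(n)
            # Variance-reduced stochastic gradient estimate.
            v = grad_i(w, i) - grad_i(w_snap, i) + mu
            # Gradient step, then the sparsity-inducing prox step.
            w = soft_threshold(w - lr * v, lr * lam)
        w_snap = w
    return w_snap

# Toy usage: sparse linear regression, f_i(w) = 0.5 * (x_i @ w - y_i)^2.
rng = np.random.default_rng(0)
X, y = rng.standard_normal((200, 50)), rng.standard_normal(200)
g = lambda w, i: (X[i] @ w - y[i]) * X[i]
w_hat = prox_svrg(g, n=200, w0=np.zeros(50), lr=0.005, lam=0.05)
```

In sketches of this kind, the snapshot gradient mu anchors each stochastic step and shrinks its variance, while the soft-thresholding step is what drives weights exactly to zero and hence compresses the model; a hyper-parameter such as the inner-loop length plays the role the abstract ascribes to controlling the optimizer's stochastic element.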