Abstract

Applying sparsity regularization in deep neural networks has proven to be an effective approach. Researchers have developed several algorithms to control the sparseness of the activation probabilities of hidden units, but each has inherent limitations. In this paper, we first analyze the strengths and weaknesses of popular sparsity algorithms and categorize them into two groups: local and global sparsity. \( L_{1/2} \) regularization is introduced for the first time as a global sparsity method for deep learning networks. Second, a combined solution is proposed that integrates local and global sparsity methods. Third, we adapt the proposed solution to two deep learning networks, the deep belief network (DBN) and the generative adversarial network (GAN), and evaluate it on the benchmark datasets MNIST and CelebA. Experimental results show that our method outperforms existing sparsity algorithms on digit recognition and achieves better performance on human face generation. Additionally, the proposed method stabilizes GAN loss curves and suppresses noise.
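
As a rough illustration of the global sparsity idea mentioned above, the following is a minimal sketch of an \( L_{1/2} \) penalty summed over all network weights, written in PyTorch. The smoothing constant `eps`, the regularization weight `lam`, and the toy network are our assumptions for illustration only; the abstract does not specify the paper's exact formulation.

```python
import torch
import torch.nn as nn

def l_half_penalty(model: nn.Module, eps: float = 1e-8) -> torch.Tensor:
    """Global L_{1/2} sparsity penalty: sum_i |w_i|^{1/2} over all parameters.

    eps is a small smoothing constant (our assumption, not from the paper)
    so the gradient stays finite at w = 0.
    """
    return sum(torch.sqrt(p.abs() + eps).sum() for p in model.parameters())

# Illustrative usage: add the penalty to the task loss with a small
# coefficient lam (value chosen arbitrarily here, not taken from the paper).
model = nn.Sequential(nn.Linear(784, 256), nn.Sigmoid(), nn.Linear(256, 10))
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
lam = 1e-4
loss = nn.functional.cross_entropy(model(x), y) + lam * l_half_penalty(model)
loss.backward()
```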
