Abstract

Neural networks are widely used in image classification, but they are prone to overfitting. Regularized sparse models can mitigate overfitting during modeling. Many existing regularization methods use the L1 norm directly as the penalty term, but L1 regularization is mainly designed to remove redundant weights, and its performance in removing redundant nodes is unsatisfactory. Other work based on the Dropout regularization method removes nodes from the network completely at random, so the resulting local network lacks discriminative power for different samples. To address these problems, a sparse regularization model based on the network structure is proposed. It uses an optimized L1/2 norm as the penalty term of the loss function to remove redundant weights; at the same time, sparsity and relevance constraints are imposed on the nodes of the neural network during training, and the probability of a node being removed is determined by the magnitude of both, so that the network removes nodes with low activation values and high relevance with higher probability while retaining the more discriminative nodes. Experimental results on public datasets show that the proposed regularization model has better generalization ability than traditional methods.
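
Read literally, the abstract combines two mechanisms: an L1/2 weight penalty added to the loss, and a node-removal probability driven jointly by activation magnitude and relevance rather than by uniform-random Dropout. The sketch below is one possible PyTorch rendering of those two ideas, not the paper's implementation: `model.encoder`, `model.head`, `model.relevance`, the smoothing constant `eps`, and the mixing weight `alpha` are all illustrative assumptions, and the paper's "optimized" L1/2 term is stood in for by a plain smoothed L1/2 norm.

```python
import torch
import torch.nn as nn

def l_half_penalty(model: nn.Module, eps: float = 1e-8) -> torch.Tensor:
    """Sum of |w|^(1/2) over weight matrices (biases excluded).

    eps smooths the non-differentiable point at w = 0; the paper's
    "optimized" L1/2 penalty is not specified here, so this plain
    L1/2 norm is a stand-in.
    """
    return sum((p.abs() + eps).sqrt().sum()
               for p in model.parameters() if p.dim() > 1)

def removal_probability(activations: torch.Tensor,
                        relevance: torch.Tensor,
                        alpha: float = 0.5) -> torch.Tensor:
    """Per-node removal probability: higher for nodes with low mean
    activation and high relevance, as the abstract describes.

    activations: (batch, nodes) layer outputs for the current batch.
    relevance:   (nodes,) relevance score per node (the paper's exact
                 relevance measure is an assumption here).
    alpha:       mixing weight between the two criteria (illustrative).
    """
    a = activations.abs().mean(dim=0)                       # mean activation per node
    low_act = 1.0 - (a - a.min()) / (a.max() - a.min() + 1e-8)  # low activation -> high score
    rel = (relevance - relevance.min()) / (relevance.max() - relevance.min() + 1e-8)
    return (alpha * low_act + (1.0 - alpha) * rel).clamp(0.0, 1.0)

def train_step(model, x, y, optimizer, lam=1e-4):
    """One training step: cross-entropy plus the L1/2 weight penalty,
    with a data-dependent node mask in place of uniform Dropout."""
    hidden = model.encoder(x)                               # hypothetical two-part model
    p_drop = removal_probability(hidden, model.relevance)   # model.relevance is assumed state
    mask = torch.bernoulli(1.0 - p_drop)                    # keep each node with prob 1 - p_drop
    logits = model.head(hidden * mask)
    loss = nn.functional.cross_entropy(logits, y) + lam * l_half_penalty(model)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this reading, nodes that are both weakly activated and highly relevant (redundant with respect to the rest of the layer) are masked most often, so the surviving sub-network differs per batch in a sample-dependent way instead of being drawn uniformly at random.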
