Abstract

Regularization is crucial for improving the generalization of deep neural networks. However, traditional regularization approaches are scenario-specific, because they rely on ingeniously hand-designed feature representations at the input, hidden, and output layers, which increases the difficulty of model development and interpretation. To this end, a novel, practical, and flexible regularization method is presented to achieve higher generalization and interpretability. Specifically, feature maps are decoupled by global and partial suppression at various scales, locating salient features that carry strong low-resolution semantic information. Moreover, a guided discarding specification for feature decoupling, which measures each feature's contribution to the network's decisions, yields decision logic with better interpretability. Subsequently, the maximum values of the feature map are suppressed by discarding the corresponding salient features. Comprehensive experiments demonstrate that the proposed adaptive regularization outperforms the state of the art in image classification accuracy, generalization, and interpretability on several widely used datasets. Adaptive regularization also helps the network mine the connections between salient features, non-salient features, and the ground truth, encouraging it to construct multiple layers of feature associations.
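The abstract's core mechanism, suppressing the maximum values of a feature map by discarding the corresponding salient features, can be illustrated with a minimal sketch. The function below is a hypothetical NumPy reconstruction for illustration only (the paper's actual algorithm, including multi-scale decoupling and the guided discarding specification, is not reproduced here): for each channel of a feature map, the spatial location holding the peak activation is zeroed with some probability, forcing the network to also exploit non-salient features.

```python
import numpy as np

def suppress_salient_features(fmap, drop_prob=0.5, rng=None):
    """Zero the peak activation in each channel with probability drop_prob.

    A minimal sketch (not the paper's method) of max-value suppression:
    for a channel selected for dropping, the spatial location holding the
    maximum activation -- the most salient feature -- is discarded.
    `fmap` has shape (channels, height, width).
    """
    rng = rng or np.random.default_rng()
    out = fmap.copy()
    for c in range(out.shape[0]):
        if rng.random() < drop_prob:
            # Locate the salient (maximal) activation in this channel.
            h, w = np.unravel_index(np.argmax(out[c]), out[c].shape)
            out[c, h, w] = 0.0  # discard the salient feature
    return out
```

Like dropout, such a layer would be active only during training and disabled (or made the identity) at inference time.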

