Abstract

In this article, we propose Sensitivity Minimization Learning (SML) to overcome the performance degradation caused by feature corruption at the testing phase, using the stochastic sensitivity measure (STSM) as a regularizer. The STSM measures output deviations between each training sample and its noisy versions, which are generated by a feature perturbation strategy. The perturbation strategy is user defined to simulate the noises that the model should defend against at the testing phase. The SML is computationally efficient in both the training and testing phases and minimizes the generalization error on both the training samples and their perturbed versions. Models regularized by the STSM can be trained efficiently by the stochastic gradient descent algorithm and applied to very large scale problems. Experimental results on eight grayscale image databases, one color image database, and two face databases show that the SML significantly outperforms several regularization techniques and yields much lower classification error when testing sets are contaminated with noise.
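To make the STSM concrete, the sketch below (not the authors' implementation) shows one way such a sensitivity term could be computed in PyTorch: the average squared output deviation between each sample in a batch and Q perturbed copies of it. The names `model` and `perturb` and the default `Q=5` are illustrative assumptions.

```python
import torch

def stsm(model, x, perturb, Q=5):
    """Estimate the stochastic sensitivity of `model` on batch `x`:
    the mean squared output deviation between each sample and Q
    noisy copies produced by the user-defined `perturb` strategy."""
    y = model(x)                        # outputs on the clean samples
    dev = 0.0
    for _ in range(Q):
        y_noisy = model(perturb(x))     # outputs on one noisy version
        dev = dev + ((y_noisy - y) ** 2).sum(dim=1).mean()
    return dev / Q
```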

Highlights

  • The goal of neural network training is to generalize well to future testing samples

  • Experimental results of the Sensitivity Minimization Learning (SML) with random feature corruption perturbation (SML-RFC) and random normal distribution noise perturbation (SML-RND) are compared with weight decay (WD), noise injection (NI), and random dropout on three classification tasks: 1) grayscale image classification (eight grayscale image datasets); 2) natural image classification based on Bag of Visual Words (BoW) features; and 3) face recognition on two face databases (a sketch of the two perturbation strategies follows this list)

  • We propose a general regularization method based on the output stochastic sensitivity measure (STSM) in this article
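The two perturbation strategies named above could look roughly like the following; this is a hedged reading of "random feature corruption" and "random normal distribution noise", with the corruption rate `p` and the noise scale `sigma` chosen purely for illustration.

```python
import torch

def random_feature_corruption(x, p=0.2):
    """SML-RFC style perturbation (our reading): randomly zero out
    a fraction p of the input features of each sample."""
    mask = (torch.rand_like(x) >= p).float()
    return x * mask

def random_normal_noise(x, sigma=0.1):
    """SML-RND style perturbation (our reading): add zero-mean
    Gaussian noise with standard deviation sigma to every feature."""
    return x + sigma * torch.randn_like(x)
```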

Summary

INTRODUCTION

The goal of neural network training is to generalize well to future testing samples. Due to the limited amount of training samples or heavily noise-contaminated training samples, the learned model may overfit to the training data, and commonly used regularization methods may fail. To overcome this problem, we propose a general robust model learning method based on the minimization of the output stochastic sensitivity measure (STSM) in this article. The learned model both fits the training data well and is robust to feature perturbations (i.e., noise); these two terms are traded off by a regularization parameter. The STSM can be approximated by a finite set of noisy versions of the training samples and can be optimized efficiently by the stochastic gradient descent algorithm. This makes the SML readily applicable to very large models and datasets.
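As a rough illustration of how this traded-off objective can be minimized with stochastic gradient descent, the sketch below adds an STSM estimate (built from Q noisy copies per sample) to a standard cross-entropy task loss. The trade-off parameter `lam`, the values `lam=0.1` and `Q=5`, and the helper `perturb` are our own illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def train_step(model, optimizer, x, t, perturb, lam=0.1, Q=5):
    """One SGD step on the SML objective: cross-entropy on the clean
    batch plus lam times an STSM estimate over Q noisy copies."""
    optimizer.zero_grad()
    y = model(x)                                    # clean outputs
    task_loss = F.cross_entropy(y, t)
    sens = sum(((model(perturb(x)) - y) ** 2).sum(dim=1).mean()
               for _ in range(Q)) / Q               # STSM estimate
    loss = task_loss + lam * sens
    loss.backward()
    optimizer.step()
    return loss.item()
```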

RELATED WORKS
PROPOSED SML
SML TRAINING AND ITS TIME COMPLEXITY
EXPERIMENTAL RESULTS
CONCLUSION