With the increasing use of deep learning on data collected by imperfect sensors in imperfect environments, the robustness of deep learning systems has become an important issue. A common approach to obtaining noise robustness is to train deep learning systems on data augmented with Gaussian noise. In this work, the common choice of Gaussian noise is challenged and the possibility of stronger robustness from non-Gaussian, impulsive noise is explored, specifically alpha-stable noise. Justified by the Generalized Central Limit Theorem and evidenced by observations in various application areas, alpha-stable noise is widely present in nature. By comparing the test accuracy of models trained with Gaussian noise and with alpha-stable noise on data corrupted by different noise types, it is found that training with alpha-stable noise is more effective than training with Gaussian noise, especially when the dataset is corrupted by impulsive noise, thereby improving the robustness of the model. Moreover, on the common-corruption benchmark dataset, training with alpha-stable noise also achieves promising results, improving the model's robustness to other corruption types and performing comparably with other state-of-the-art data augmentation methods. Consequently, a novel data augmentation method is proposed that replaces the Gaussian noise typically added to the training data with alpha-stable noise.
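The augmentation idea described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes SciPy's `levy_stable` distribution for sampling, and the function name and parameter defaults (`alpha=1.5`, `scale=0.05`) are hypothetical choices for the example.

```python
import numpy as np
from scipy.stats import levy_stable

def augment_with_alpha_stable(x, alpha=1.5, scale=0.05, rng=None):
    """Add symmetric alpha-stable noise to an input batch.

    alpha lies in (0, 2]; alpha = 2 recovers the Gaussian case, and
    smaller alpha gives heavier (more impulsive) tails. Setting the
    skewness parameter beta = 0 makes the noise symmetric.
    """
    noise = levy_stable.rvs(alpha, 0.0, loc=0.0, scale=scale,
                            size=x.shape, random_state=rng)
    return x + noise

# Usage: corrupt a small (hypothetical) image batch during training.
x = np.random.rand(4, 32, 32)
x_noisy = augment_with_alpha_stable(x, alpha=1.5, scale=0.05)
```

Swapping this call in for the usual Gaussian-noise augmentation is the only change the proposed method requires; the training pipeline itself is untouched.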