In recent years, deep convolutional neural networks have become an indispensable tool for pattern recognition in many machine learning applications, especially image classification. At the same time, these models are often deployed in critical systems, which has motivated recent research into their robustness and reliability. One of the most important issues for these models is their susceptibility to various adversarial attacks. In our previous work, Milošević and Racković (Neural Network World. 2019;29(4):221–34) and Milošević and Racković (Neural Comput Applic. 2021;33:7593–602), a new type of learning applicable to all convolutional neural networks was introduced: classification based on negative features, together with a synergy of traditional models and the newly introduced ones. In the case of partial inputs/image occlusion, it was shown that this method produces models that are more robust and perform better than traditional models of the same architecture. In this paper, the earlier proposed synergy is extended by introducing negatively trained features and an additional synergy between four independent neural network models. A detailed analysis of the robustness of the newly proposed model is performed on the EMNIST and CIFAR-10 image classification data sets under selected input occlusions and adversarial attacks. The proposed architecture improves the robustness of the neural network and increases its resistance to various types of input damage and adversarial attacks.
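The abstract does not specify how the predictions of the four models are fused. As a minimal sketch only, the Python/PyTorch snippet below shows one plausible way to combine two positively trained and two negatively trained networks; the function name combine_predictions, the equal-weight fusion rule, and the inversion of the negative-model scores are illustrative assumptions, not the authors' exact method.

```python
# Hypothetical sketch of a four-model synergy: two networks trained on
# standard ("positive") labels and two trained on negative features.
# The aggregation rule (averaging positive scores with inverted negative
# scores) is an illustrative assumption, not the paper's exact procedure.
import torch
import torch.nn.functional as F

def combine_predictions(positive_models, negative_models, x, num_classes=10):
    """Fuse class scores from positively and negatively trained CNNs."""
    with torch.no_grad():
        # Positive models: a higher score is evidence FOR a class.
        pos_scores = torch.stack(
            [F.softmax(m(x), dim=1) for m in positive_models]
        ).mean(dim=0)

        # Negative models: a higher score is evidence AGAINST a class,
        # so their probabilities are inverted and renormalized before fusing.
        neg_scores = torch.stack(
            [F.softmax(m(x), dim=1) for m in negative_models]
        ).mean(dim=0)
        inverted_neg = (1.0 - neg_scores) / (num_classes - 1)

        # Simple equal-weight fusion of the two sources of evidence.
        fused = 0.5 * pos_scores + 0.5 * inverted_neg
    return fused.argmax(dim=1)
```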