Abstract
At the core of every artificial neural network lie neurons that are approximate models of their biological counterparts. However, networks made of artificial neurons can become unreliable in the face of adversarial machine learning attacks. We traced this issue back to its roots and created a reliable artificial neuron (called the binomial neuron) that better resists adversarial attacks such as DeepFool, the Fast Gradient Sign Method (FGSM), Pixel, Square, and Carlini and Wagner's (C&W) attacks. We propose to replace the input value of an artificial neuron with the expectation of a Bernoulli random variable. This random variable is determined by the spikes the previous-layer binomial neuron fires whenever its activation crosses a randomly generated threshold. Different model variants can be created depending on how long the neuron waits to aggregate the spikes and estimate the expectation. We conducted extensive experiments to evaluate the reliability of this new neuron. The results confirm an increase in robustness/accuracy under adversarial attacks, or equivalently, a substantial increase in the amount of distortion attackers need to add to images in order to create successful adversarial samples. These distortions exceed the perceptibility range to the point of approaching total image destruction.
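Since the abstract only sketches the spike-and-aggregate mechanism, the following is a minimal, hypothetical PyTorch sketch of how such a step could look. The function name `binomial_neuron_forward`, the clamping of activations to [0, 1], and the window length `T` are illustrative assumptions, not the authors' implementation.

```python
import torch

def binomial_neuron_forward(activations: torch.Tensor, T: int = 8) -> torch.Tensor:
    """Hedged sketch of the spike-and-aggregate scheme described in the abstract.

    Each previous-layer activation (assumed squashed to [0, 1], e.g. by a
    sigmoid) is treated as the success probability of a Bernoulli random
    variable: at each of T time steps, a uniform random threshold is drawn
    and a spike (1) is emitted whenever the activation exceeds it. The
    downstream neuron receives the empirical mean of those spikes, i.e. an
    estimate of the Bernoulli expectation. The window length T is the knob
    the abstract says yields different model variants.
    """
    p = activations.clamp(0.0, 1.0)                  # per-step spike probability
    thresholds = torch.rand(T, *p.shape)             # one random threshold per step
    spikes = (p.unsqueeze(0) > thresholds).float()   # Bernoulli(p) spike train
    return spikes.mean(dim=0)                        # expectation estimate fed forward

# A longer aggregation window gives a lower-variance estimate of the input:
x = torch.tensor([0.1, 0.5, 0.9])
print(binomial_neuron_forward(x, T=1))    # coarse: each unit outputs 0 or 1
print(binomial_neuron_forward(x, T=64))   # concentrates near [0.1, 0.5, 0.9]
```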