Abstract

Current deep learning methods and technologies have reached the level of deployment in software and hardware for real-life applications. However, recent studies have shown that deep learning architectures are highly vulnerable to attacks that exploit input perturbations. In this work, we investigate the effects of these attacks on the outputs of each layer of deep architectures and on their performance in terms of classification accuracy. The results show that, without a defense mechanism, even simple attacks devastate the outputs of every layer of a deep architecture and degrade its classification performance. We then propose multiple defense mechanisms to protect deep architectures and make them more robust to input perturbation attacks.
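As a minimal sketch of the kind of input-perturbation attack the abstract refers to, the snippet below applies the Fast Gradient Sign Method (FGSM), a well-known simple attack, to a toy logistic-regression "model". The model, its random weights, and the epsilon value are illustrative assumptions, not taken from the paper; the paper's experiments concern deep architectures, but the gradient-sign perturbation works the same way.

```python
import numpy as np

# Hypothetical toy model: a logistic-regression classifier with random
# weights (illustrative assumption; the paper studies deep architectures).
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.0

def predict(x):
    """Probability of class 1 under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y, eps):
    """Fast Gradient Sign Method: perturb x by eps in the sign of the
    cross-entropy loss gradient with respect to the input."""
    p = predict(x)
    # For logistic regression, d(loss)/dx = (p - y) * w
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

x = rng.normal(size=16)
y = 1.0 if predict(x) > 0.5 else 0.0  # use the model's own label as ground truth
x_adv = fgsm_perturb(x, y, eps=0.25)
print("clean confidence:", predict(x), "perturbed confidence:", predict(x_adv))
```

By construction, the perturbation moves the input in the direction that increases the loss, so the model's confidence in the originally predicted class drops; larger `eps` makes the effect stronger but the perturbation more visible.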
