Abstract

In recent years, deep neural networks (DNNs) have become popular in many disciplines, such as computer vision (CV) and natural language processing (NLP). The evolution of hardware has enabled researchers to develop many powerful deep learning (DL) models to tackle numerous challenging problems. One of the most important challenges in the CV area is medical image analysis, in which DL models process medical images (such as magnetic resonance imaging (MRI), X-ray, and computed tomography (CT) scans) using convolutional neural networks (CNNs) for the diagnosis or detection of diseases. When these models function properly, they can significantly improve health systems. However, recent studies have shown that CNN models are vulnerable to adversarial attacks with imperceptible perturbations. In this paper, we summarize existing methods for adversarial attacks, detections, and defenses on medical imaging. We show that many attacks, which are undetectable by the human eye, can significantly degrade the performance of these models. Nevertheless, some effective defense and attack-detection methods keep the models safe to an extent. We end with a discussion of the current state of the art and future challenges.

Highlights

  • Deep learning provides researchers with powerful models that advance science and technology

  • Convolutional neural networks (CNNs) are the most important type of DL models for image processing and analysis, as they are very effective in learning meaningful features

  • Deep learning has dramatically improved medical image analysis, and it has become a crucial tool for doctors and hospitals


Introduction

Convolutional neural networks (CNNs) are the most important type of DL model for image processing and analysis, as they are very effective at learning meaningful features. DL has become a useful supportive tool for doctors through medical image analysis, as it saves significant time on doctors' tasks. Although deep learning achieves very high performance on vision tasks, recent studies have shown that it can be vulnerable to adversarial attacks [6] and stealth attacks [7]. Ilyas et al. [11] attributed the success of adversarial attacks to models' ability to generalize on a specific dataset by exploiting non-robust features.
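To make the threat concrete, the sketch below shows the fast gradient sign method (FGSM), one of the canonical attacks typically covered in this literature: it adds an epsilon-scaled perturbation along the sign of the loss gradient with respect to the input pixels. This is a minimal illustration, not the paper's own method; the `model`, `image`, and `label` names are hypothetical placeholders for any pretrained PyTorch CNN classifier and a labeled medical image.

```python
# Minimal FGSM sketch (illustrative only).
# Assumes `model` is any pretrained PyTorch image classifier
# (e.g., a CNN for chest X-ray classification), `image` is a
# pixel tensor scaled to [0, 1], and `label` holds class indices.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Craft an adversarial example by stepping along the sign of the
    loss gradient with respect to the input pixels."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Imperceptible perturbation: epsilon-scaled sign of the gradient.
    adv_image = image + epsilon * image.grad.sign()
    # Keep pixels in the valid range (assuming inputs scaled to [0, 1]).
    return adv_image.clamp(0, 1).detach()
```

For a well-trained classifier, even a small epsilon (on the order of a few intensity levels in an 8-bit image) can often flip the predicted label while the perturbation remains invisible to a human observer, which is the vulnerability discussed above.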

