Abstract

Advances in hardware and computing power have enabled deep learning to be applied in a variety of fields, particularly in AI medical applications for intelligent medicine and the medical metaverse. Deep learning models now assist in many clinical medical image analysis tasks, including fusion, registration, detection, classification, and segmentation. In recent years, many deep learning-based approaches have been developed for medical image recognition, including classification and segmentation. However, these models are susceptible to adversarial samples, which threatens their real-world deployment and makes them unsuitable for clinical use. This paper provides an overview of adversarial attack strategies that have been proposed against medical image models and of the defense methods used to protect them. We assess the advantages and disadvantages of these strategies and compare their efficiency. We then examine the current state and limitations of research on adversarial attacks and defenses for deep learning models in medical image recognition. Finally, we offer several suggestions for enhancing the robustness of medical image deep learning models in intelligent medicine and the medical metaverse.
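
To make the threat concrete, the sketch below illustrates the kind of gradient-based attack typically surveyed in this literature: the Fast Gradient Sign Method (FGSM), a canonical white-box attack. This is an illustrative example, not a method from the paper; the model, input tensors, and epsilon value are hypothetical placeholders.

```python
# Minimal FGSM sketch against a generic image classifier (assumed PyTorch model).
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft x_adv = x + epsilon * sign(grad_x Loss(model(x), y))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

# Hypothetical usage:
# model = some_medical_image_classifier.eval()
# x_adv = fgsm_attack(model, images, labels, epsilon=8 / 255)
```

A small perturbation budget (epsilon) is usually enough to flip predictions while remaining imperceptible to clinicians, which is why defenses such as adversarial training evaluate robustness against exactly this class of perturbation.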
