Abstract

The rapid spread and widespread outbreak of COVID-19 have had a devastating impact on health systems around the world. The urgency of countermeasures has led to the widespread use of Computer-Aided Diagnosis (CAD) applications based on deep neural networks. The unprecedented success of machine learning techniques, especially deep learning on medical images, has made them prominent in improving the efficiency and accuracy of COVID-19 diagnosis. However, recent studies on the security of AI-based systems have revealed that these deep learning models are vulnerable to adversarial attacks. Adversarial examples generated by attack algorithms are imperceptible to the human eye yet can easily deceive state-of-the-art deep learning models; they therefore threaten security-critical learning applications. In this paper, we summarize and discuss the methodology, results, and concerns of recent work on the robustness of AI-based COVID-19 systems. We explore important security concerns related to deep neural networks and review current state-of-the-art defense methods for preventing performance degradation.
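To make the notion of an attack algorithm concrete, the sketch below shows the Fast Gradient Sign Method (FGSM), one of the simplest and most widely cited ways to generate adversarial examples. It is an illustrative example, not a method taken from the surveyed works; the function name `fgsm_attack`, the `model` placeholder, and the `epsilon` budget of 0.03 are assumptions chosen for illustration.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Fast Gradient Sign Method: perturb input x by a small, bounded
    step in the direction that most increases the classification loss.

    A small epsilon keeps the perturbation visually imperceptible while
    still being able to flip the model's prediction.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Because the perturbation is bounded element-wise by epsilon, the adversarial image looks identical to a human observer, which is precisely what makes such attacks dangerous for diagnostic systems.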
