Abstract
Machine learning (ML) models are susceptible to attacks, such as noise, privacy invasion, replay, false data injection, and evasion attacks, which undermine their reliability and trustworthiness. Evasion attacks, which probe trained models to identify and exploit their vulnerabilities, and poisoning attacks, which corrupt the training data to obtain skewed models whose behavior can be steered by submitting specific inputs, remain a severe and open problem for critical domains and systems that rely on ML-based or other AI solutions, such as healthcare and justice. In this study, we performed a comprehensive analysis of the sensitivity of artificial intelligence approaches to corrupted data in order to evaluate their reliability and resilience. Such systems must be able to detect that something is wrong, determine how to overcome the resulting problems, and leverage what they have learned to improve their robustness. The main research goal was to evaluate the sensitivity and responsiveness of artificial intelligence algorithms to poisoned signals by comparing several models fed with both trusted and corrupted data. A case study from the healthcare domain supports the analyses. The results of the experimental campaign were evaluated in terms of accuracy, specificity, sensitivity, F1-score, and the area under the ROC curve.
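To make the experimental setting concrete, the following is a minimal sketch (not the authors' actual pipeline) of a label-flipping poisoning experiment: a classifier is trained once on trusted labels and once on labels corrupted at a chosen fraction, and both are scored on a clean test set with the metrics named above. The dataset, model choice, and helper functions (`poison_labels`, `evaluate`) are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's code): compare a model trained
# on trusted data against one trained on label-flipped (poisoned) data, using
# accuracy, sensitivity, specificity, F1-score, and ROC AUC on a clean test set.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Stand-in binary healthcare-like dataset, chosen only for illustration.
X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def poison_labels(y, fraction, rng):
    """Flip the labels of a random fraction of training samples (hypothetical helper)."""
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned

def evaluate(model, X_te, y_te):
    """Compute the metrics mentioned in the abstract on a trusted test set."""
    pred = model.predict(X_te)
    proba = model.predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, pred).ravel()
    return {
        "accuracy": accuracy_score(y_te, pred),
        "sensitivity": recall_score(y_te, pred),   # TP / (TP + FN)
        "specificity": tn / (tn + fp),             # TN / (TN + FP)
        "f1": f1_score(y_te, pred),
        "roc_auc": roc_auc_score(y_te, proba),
    }

# fraction = 0.0 is the trusted-data baseline; higher fractions simulate poisoning.
for fraction in (0.0, 0.1, 0.3):
    y_train = poison_labels(y_tr, fraction, rng)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_train)
    print(f"poisoned fraction={fraction:.1f}", evaluate(model, X_te, y_te))
```

Comparing the metric values across poisoning fractions gives a simple, reproducible measure of how sensitive a given model is to corrupted training signals, which is the kind of comparison the study describes.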