Abstract

The paper presents a method for evaluating the robustness of industrial systems with built-in artificial intelligence (AI) to adversarial attacks. The influence of adversarial attacks on system performance is studied, and a scheme and scenarios for implementing attacks on industrial systems with built-in AI are presented. A comprehensive set of metrics for studying the robustness of ML models is proposed, comprising test data set quality metrics (MDQ), ML model quality metrics (MMQ), and metrics of model robustness to adversarial attacks (MSQ). The method is based on this metric set and includes the following steps: generating a test data set containing clean samples; assessing the quality of the test data set using the MDQ metrics; identifying relevant adversarial attack methods; generating adversarial examples and a test data set containing the adversarial samples to evaluate the robustness of the ML model; assessing the quality of the generated adversarial test data set using the MDQ metrics; evaluating the quality of the ML model using the MMQ metrics; and evaluating model robustness using the MSQ metrics.
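The evaluation pipeline described above can be sketched in code. The example below is an illustrative toy, not the paper's implementation: it uses a hypothetical 1-D linear classifier, an FGSM-style perturbation as the adversarial attack, and assumed concrete formulas for the metrics (MDQ as class balance, MMQ as accuracy, MSQ as the ratio of adversarial to clean accuracy), since the abstract does not define them.

```python
# Illustrative sketch of the clean-set / adversarial-set evaluation pipeline.
# Metric names (MDQ, MMQ, MSQ) follow the abstract; the concrete formulas,
# the toy model, and the FGSM-style attack are assumptions for illustration.

import random

random.seed(0)

# -- Step 1: generate a clean test set (two 1-D Gaussian classes) ----------
def make_clean_set(n=200):
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        x = random.gauss(-1.0 if label == 0 else 1.0, 0.5)
        data.append((x, label))
    return data

# -- Step 2: MDQ, sketched here as class balance (1.0 = perfectly balanced)
def mdq_balance(data):
    pos = sum(label for _, label in data)
    return 1.0 - abs(pos / len(data) - 0.5) * 2

# Toy model under test: the sign of x decides the class.
def predict(x):
    return 1 if x >= 0.0 else 0

# -- Steps 3-4: FGSM-style attack for this 1-D model: shift x by eps in the
#    direction that moves it toward the wrong side of the decision boundary.
def attack(x, label, eps=0.8):
    direction = -1.0 if label == 1 else 1.0
    return x + eps * direction

# -- Steps 5-7: MMQ as accuracy, MSQ as the adversarial/clean accuracy ratio
def accuracy(data):
    return sum(predict(x) == y for x, y in data) / len(data)

clean = make_clean_set()
adversarial = [(attack(x, y), y) for x, y in clean]

mmq_clean = accuracy(clean)
mmq_adv = accuracy(adversarial)
msq = mmq_adv / mmq_clean if mmq_clean else 0.0

print(f"MDQ (class balance): {mdq_balance(clean):.2f}")
print(f"MMQ clean acc: {mmq_clean:.2f}, adversarial acc: {mmq_adv:.2f}")
print(f"MSQ robustness ratio: {msq:.2f}")
```

An MSQ ratio near 1.0 would indicate a model whose quality barely degrades under attack, while a ratio near 0.0 indicates a model that the attack breaks almost completely; in practice each metric group would aggregate several indicators rather than the single score used here.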
