As an important kind of DNN (deep neural network), the CNN (convolutional neural network) has made remarkable progress and is widely used in the vision and decision-making of autonomous robots. Nonetheless, in many scenarios, even a minor perturbation of the input may lead to serious errors in a CNN, which means CNNs lack robustness. Formal verification is an effective method to guarantee the robustness of CNNs. Existing works predominantly concentrate on local robustness verification, which requires considerable time and space. Probabilistic robustness quantifies the robustness of CNNs and is a practical measure of it. The state-of-the-art approach to probabilistic robustness verification is test-driven: it relies on manual judgment to decide whether a DNN satisfies probabilistic robustness and does not involve robustness repair, although robustness repair can further improve the robustness of CNNs. To address these limitations, we propose a probabilistic model checking-driven robustness guarantee framework for CNNs, named PRG4CNN. This is the first automated and complete framework for guaranteeing the probabilistic robustness of CNNs. It comprises four steps: (1) modeling a CNN as an MDP (Markov decision process) by model learning, (2) specifying the probabilistic robustness of the CNN via a PCTL (Probabilistic Computation Tree Logic) formula, (3) verifying the probabilistic robustness with a probabilistic model checker, and (4) repairing the probabilistic robustness by counterexample-guided sensitivity analysis, if probabilistic robustness does not hold for the CNN. We conduct experiments on CNNs of various scales trained on the handwritten-digit dataset MNIST, and demonstrate the effectiveness of PRG4CNN.
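As an illustrative sketch only (not the formalization used in the paper), a probabilistic robustness property of the kind mentioned in step (2) could be written in PCTL roughly as
\[
  \mathrm{P}_{\geq \theta}\,\bigl[\, \Diamond\, \mathit{correct} \,\bigr],
\]
asserting that, under every resolution of the nondeterminism in the learned MDP (e.g., the admissible input perturbations), the probability of eventually reaching a state labelled \(\mathit{correct}\) (the CNN outputs the ground-truth label) is at least the threshold \(\theta\). The threshold \(\theta\), the atomic proposition \(\mathit{correct}\), and the perturbation model are assumptions introduced here purely for exposition.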