Abstract

In spiking neural networks (SNNs), emerging security threats such as adversarial samples and poisoned data samples can degrade overall model performance. Eliminating the impact of malicious data samples on the whole model is therefore an important issue. In SNNs, a naive solution is to delete all malicious data samples and retrain the model on the entire remaining dataset; in the era of large models, this is impractical due to the enormous computational cost. To address this problem, we present a novel SNN model strengthening method that supports fast and accurate removal of malicious data from a trained model. Specifically, we use untrained data that follows the same distribution as the training data. Since the untrained data has no effect on the initial model, and the malicious data should likewise have no effect on the final refined model, we can use the initial model's output on the untrained data to guide the final refined model. Based on this idea, we present a stochastic gradient descent method that iteratively determines the final model. We perform a comprehensive performance evaluation on two industrial steel surface datasets. Experimental results show that our model strengthening method provides accurate malicious data elimination while running 11.7× to 27.2× faster than the baseline method.
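As a rough illustration of the guidance idea described in the abstract (not the paper's exact algorithm), the sketch below assumes a PyTorch-style SNN classifier that outputs logits. The names strengthen_model, initial_model, retained_loader, untrained_loader, and guide_weight are hypothetical: the refined model is updated with SGD so that it still fits the retained, non-malicious training data while its predictions on held-out (untrained) data stay close to those of the frozen initial model.

import copy
import torch
import torch.nn.functional as F

def strengthen_model(initial_model, retained_loader, untrained_loader,
                     epochs=5, lr=1e-2, guide_weight=1.0):
    """Hypothetical sketch: refine a trained SNN so malicious samples lose influence."""
    refined_model = copy.deepcopy(initial_model)
    initial_model.eval()  # frozen reference model; by assumption unaffected by the untrained data
    optimizer = torch.optim.SGD(refined_model.parameters(), lr=lr)

    for _ in range(epochs):
        for (x_clean, y_clean), (x_unseen, _) in zip(retained_loader, untrained_loader):
            # Standard classification loss on the retained (non-malicious) data.
            ce_loss = F.cross_entropy(refined_model(x_clean), y_clean)

            # Guidance loss: keep the refined model's predictions on the held-out
            # data close to the initial model's predictions (soft targets).
            with torch.no_grad():
                teacher_logits = initial_model(x_unseen)
            guide_loss = F.kl_div(
                F.log_softmax(refined_model(x_unseen), dim=-1),
                F.softmax(teacher_logits, dim=-1),
                reduction="batchmean",
            )

            loss = ce_loss + guide_weight * guide_loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    return refined_model

In this reading, the KL term plays the role of the guidance signal from the initial model's outputs on untrained data, and plain SGD iterates toward the refined model; the actual loss formulation and schedule used in the paper may differ.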
