Abstract

Neural networks are subject to malicious data poisoning attacks that degrade a model's ability to make accurate predictions. The attacks are generated using adversarial techniques that are imperceptible to the human eye, since they use minimal noise to alter features in ways that end up shifting the decision boundaries of the prediction model. Predicting the State of Health (SOH) of lithium-ion batteries in an adversarial context becomes a challenging task, especially if the model is expected to always predict at a very high accuracy level. Our article presents three novel contributions. The first contribution is an SOH prediction model that achieves one of the best accuracy rates in the literature (R² = 99.82%) while using the simplest LSTM configuration reported in the literature. The second contribution is the implementation of three state-of-the-art adversarial data poisoning attacks at decision time, namely the Fast Gradient Sign Method (FGSM), the Momentum Iterative Method (MIM), and the Basic Iterative Method (BIM), and the assessment of their impact on the original prediction accuracy. Most of the literature applies these attacks in a classification context, whereas we apply them to a time-series prediction context. The third and most important contribution of this article is a generic defense strategy, combined with a feature engineering method, that can be generalized to prevent potential adversarial attacks on any prediction model in a time-series prediction context. The accuracy of the model is assessed using the error estimators Root Mean Square Error (RMSE), Mean Absolute Error (MAE), and R squared (R²). The results show that adversarial data poisoning attacks are lethal to a time-series prediction model, and that our proposed defense strategy is able to detect and flag the existence of malicious data using a Support Vector Machine (SVM) classifier with a very high confidence rate (area under the curve = 0.996), which allows our model to defend against potentially unseen adversarial attacks.
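To make the attack mechanics concrete, the following is a minimal sketch of the FGSM step in PyTorch, applied to a regression model such as an LSTM SOH predictor. The model, the epsilon value, and the MSE loss are illustrative assumptions; the abstract does not specify the authors' implementation details. BIM repeats this step several times with a small step size while clipping the result to an epsilon-ball around the original input, and MIM additionally accumulates the gradient with a momentum term.

```python
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.01):
    """One-step FGSM: x_adv = x + epsilon * sign(grad_x L(model(x), y)).

    x: input window of the time series, shape (batch, seq_len, features)
    y: target SOH values. epsilon bounds the L-infinity norm of the
    perturbation, which is what keeps the poisoned data imperceptible.
    """
    loss_fn = nn.MSELoss()  # assumed regression loss
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step in the direction that maximally increases the prediction error.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()
```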
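For reference, the error estimators named above follow their standard definitions, where y_i are the measured SOH values, ŷ_i the model predictions, and ȳ the mean of the measurements:

```latex
\mathrm{RMSE} = \sqrt{\tfrac{1}{n}\textstyle\sum_{i=1}^{n} (y_i - \hat{y}_i)^2},
\qquad
\mathrm{MAE} = \tfrac{1}{n}\textstyle\sum_{i=1}^{n} \lvert y_i - \hat{y}_i \rvert,
\qquad
R^2 = 1 - \frac{\sum_{i=1}^{n} (y_i - \hat{y}_i)^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}
```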
