Abstract

Demand forecasting (DF) plays an essential role in supply chain management, as it provides an estimate of the goods that customers are expected to purchase in the foreseeable future. While machine learning techniques are widely used for building DF models, models built this way also become susceptible to data poisoning attacks. In this article, we study the vulnerability of linear regression DF models to targeted poisoning attacks, in which the attacker controls the behavior of the forecasting model on a specific target sample without compromising overall forecasting performance. We devise a gradient-optimization framework for targeted regression poisoning in white-box settings, and further design a regression value manipulation strategy for targeted poisoning in black-box settings. We also discuss possible countermeasures to defend against our attacks. Extensive experiments are conducted on two real-world datasets with four linear regression models. The results demonstrate that our attacks are highly effective and can achieve a large prediction deviation while controlling less than 1% of the training samples.
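To make the attack setting concrete, the following is a minimal illustrative sketch of gradient-based targeted poisoning against a linear regression model. It is not the paper's exact algorithm: the ridge regression learner, the single poison point, the synthetic data, and all names (`fit_ridge`, `target_loss`, `y_attack`, the learning rate, etc.) are assumptions made for illustration only.

```python
# Illustrative sketch (assumptions, not the paper's method): one poisoned
# training point is optimized so that a least-squares model's prediction on
# a chosen target sample is pushed toward an attacker-chosen value.
import numpy as np

rng = np.random.default_rng(0)

def fit_ridge(X, y, lam=1e-3):
    """Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Synthetic stand-in for demand-forecasting training data.
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

x_target = rng.normal(size=d)          # sample the attacker wants to control
y_attack = x_target @ w_true + 5.0     # attacker-chosen deviated prediction

# One poison point (features x_p, label y_p); x_p is refined by numerical
# gradient descent on the squared deviation of the target prediction.
x_p, y_p = rng.normal(size=d), y_attack

def target_loss(x_p):
    w = fit_ridge(np.vstack([X, x_p]), np.append(y, y_p))
    return (x_target @ w - y_attack) ** 2

eps, lr = 1e-4, 0.5
for _ in range(200):
    grad = np.array([
        (target_loss(x_p + eps * e) - target_loss(x_p - eps * e)) / (2 * eps)
        for e in np.eye(d)
    ])
    x_p -= lr * grad  # move the target prediction toward y_attack

w_clean = fit_ridge(X, y)
w_poison = fit_ridge(np.vstack([X, x_p]), np.append(y, y_p))
print("clean prediction   :", x_target @ w_clean)
print("poisoned prediction:", x_target @ w_poison, "(attacker goal:", y_attack, ")")
```

Running the sketch shows the poisoned model's prediction on `x_target` drifting toward `y_attack` while the rest of the training data, and hence the overall fit, is left essentially unchanged; a white-box attacker with access to the model could replace the numerical gradient with an analytic one.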
