Abstract

As Machine Learning (ML) models are increasingly employed in applications across many fields, the threat of adversarial attacks against these models is also growing. Adversarial samples crafted via specialized attack algorithms have been shown to significantly degrade the performance of ML models. Furthermore, adversarial samples generated for a particular model have been found to transfer to other models, decreasing accuracy and other performance metrics for models they were not originally crafted against. Recent research has proposed many defense approaches for making ML models robust, ranging from adversarial input re-training to defensive distillation, among others. While these approaches operate at the model level, we propose an alternative approach to defending ML models against adversarial attacks using Moving Target Defense (MTD). We formulate the problem and provide preliminary results that demonstrate the validity of the proposed approach.
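To make the core idea concrete, the sketch below illustrates one simple instantiation of an MTD-style defense: a pool of structurally different models is trained on the same data, and a pool member is chosen at random to answer each query, so adversarial samples optimized against any single fixed model face a moving target at inference time. The `MovingTargetClassifier` class, the pool composition, and the uniform per-query switching policy are illustrative assumptions for this sketch, not the formulation proposed in the paper.

```python
# Minimal sketch of an MTD-style defense for ML models (illustrative only;
# the pool construction and switching policy are assumptions, not the
# paper's formulation).
import random

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC


class MovingTargetClassifier:
    """Serve predictions from a randomly chosen member of a model pool,
    so adversarial samples crafted against any single fixed model face a
    moving target at inference time."""

    def __init__(self, models, seed=None):
        self.models = models
        self.rng = random.Random(seed)

    def fit(self, X, y):
        for model in self.models:
            model.fit(X, y)
        return self

    def predict(self, X):
        # Pick one pool member per query batch; a per-sample or
        # scheduled switching policy could be substituted here.
        model = self.rng.choice(self.models)
        return model.predict(X)


if __name__ == "__main__":
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0
    )

    # Pool of structurally different models trained on the same data.
    pool = [
        LogisticRegression(max_iter=2000),
        RandomForestClassifier(n_estimators=100, random_state=1),
        SVC(gamma="scale"),
    ]
    mtd = MovingTargetClassifier(pool, seed=42).fit(X_train, y_train)
    acc = (mtd.predict(X_test) == y_test).mean()
    print(f"Clean accuracy of randomly selected model: {acc:.3f}")
```

The design intuition is that adversarial samples often transfer imperfectly across models with different architectures or training randomness, so randomizing which model answers a given query raises the cost of a successful attack without modifying any individual model.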
