Abstract

As Machine Learning (ML) models are increasingly deployed across a wide range of applications and fields, the threat of adversarial attacks against them is also growing. Adversarial samples crafted via specialized attack algorithms have been shown to significantly degrade the performance of ML models. Moreover, adversarial samples generated for a particular model have been found to transfer to other models, degrading accuracy and other performance metrics for models they were not originally crafted against. Recent research has proposed many defense approaches for making ML models robust, ranging from adversarial re-training to defensive distillation, among others. While these approaches operate at the model level, we propose an alternative approach to defending ML models against adversarial attacks, using Moving Target Defense (MTD). We formulate the problem and provide preliminary results that demonstrate the validity of the proposed approach.
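A minimal sketch of the general idea, under the assumption that the MTD strategy randomly switches among a pool of independently trained models at prediction time so that adversarial samples crafted against any single member are less likely to transfer to the model actually answering a query. The abstract does not specify the exact mechanism; the class, the selection policy, and the model interface below are illustrative assumptions, not details from the paper.

```python
import random

class MovingTargetEnsemble:
    """Serve each prediction request with a randomly chosen model from a pool,
    so the attacker cannot know which model will score a given input."""

    def __init__(self, models, weights=None):
        self.models = models        # pool of independently trained models
        self.weights = weights      # optional switching distribution over the pool

    def predict(self, x):
        # Randomized model selection is the "moving target".
        model = random.choices(self.models, weights=self.weights, k=1)[0]
        return model.predict(x)

# Hypothetical usage with scikit-learn-style classifiers (assumed interface):
# pool = [clf_a, clf_b, clf_c]          # models trained with different seeds/architectures
# defense = MovingTargetEnsemble(pool)
# y_hat = defense.predict(x_test)
```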
