Abstract

The recent rise of adversarial machine learning highlights the vulnerabilities of systems across a wide range of application domains. This paper focuses on the important domain of automatic space surveillance based on the acoustic modality. After setting up a state-of-the-art solution using log-Mel spectrograms modeled by a convolutional neural network, we systematically investigate four types of adversarial attacks: a) Fast Gradient Sign, b) Projected Gradient Descent, c) Jacobian Saliency Map, and d) Carlini & Wagner $\ell_{\infty}$. We consider experimental scenarios aiming at inducing false positives or negatives, and we thoroughly examine the attacks' efficiency. Several attack types are shown to reach high success rates by injecting relatively small perturbations into the original audio signals. This underlines the need for suitable and effective defense strategies, which would boost reliability in machine-learning-based solutions.
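To illustrate the simplest of the attacks listed above, the following is a minimal sketch of the Fast Gradient Sign idea: perturb the input along the sign of the loss gradient, with the perturbation bounded in $\ell_{\infty}$ norm by a budget epsilon. A logistic-regression "model" stands in for the paper's CNN purely for illustration; all names and values here are assumptions, not the authors' code.

```python
import numpy as np

def fgsm(w, b, x, y, epsilon):
    """One Fast Gradient Sign step against a logistic-regression stand-in.

    Illustrative only: the paper attacks a CNN over log-Mel spectrograms,
    but the FGSM update rule is the same for any differentiable model.
    Returns x_adv with ||x_adv - x||_inf <= epsilon.
    """
    # Forward pass: p = sigmoid(w . x + b), binary cross-entropy loss.
    z = float(x @ w + b)
    p = 1.0 / (1.0 + np.exp(-z))
    # Gradient of the BCE loss with respect to the *input* x is (p - y) * w.
    grad_x = (p - y) * w
    # Single step along the sign of the gradient (untargeted attack).
    return x + epsilon * np.sign(grad_x)

# Toy usage: x plays the role of one flattened spectrogram frame (assumption).
w = np.array([0.5, -0.3])
b = 0.1
x = np.array([1.0, 2.0])
y = 1.0  # true label, e.g. "event present"
x_adv = fgsm(w, b, x, y, epsilon=0.05)
```

Projected Gradient Descent can be viewed as iterating this step and projecting back into the epsilon-ball after each iteration, which is why it typically succeeds with smaller perturbations than a single FGSM step.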
