Abstract

Recurrent neural networks are widely used in machine learning systems that process time-series data, including health monitoring, object tracking in video, and automatic speech recognition (ASR). While much work has demonstrated the vulnerability of deep neural networks to so-called adversarial perturbations, the majority of this work has focused on convolutional neural networks that process non-sequential data for tasks such as image recognition. We propose that the memory and parameter-sharing properties of recurrent neural networks make them susceptible to periodic adversarial perturbations that exploit these features. In this paper, we demonstrate a general application of deep reinforcement learning to the generation of periodic adversarial perturbations in a black-box attack against recurrent neural networks processing sequential data. We successfully learn an attack policy that generates adversarial perturbations against the DeepSpeech ASR system and further demonstrate that this policy generalizes to a set of unseen examples in real time.
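
To make the black-box setting concrete, the sketch below illustrates one way such an attack loop could be structured: a simple REINFORCE-style Gaussian policy learns the amplitude, frequency, and phase of a periodic (sinusoidal) perturbation using only a scalar reward returned by the target ASR system. This is not the paper's implementation; `query_asr_error` is a hypothetical placeholder for querying DeepSpeech and scoring the resulting transcription error, and all parameter choices are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): REINFORCE over the parameters of a
# periodic perturbation, driven only by black-box reward feedback from the ASR.
import numpy as np

SAMPLE_RATE = 16000          # Hz, typical input rate for DeepSpeech-style ASR
CLIP_SECONDS = 1.0

def make_periodic_perturbation(amp, freq, phase, n_samples):
    """Build a sinusoidal perturbation that repeats across the whole clip."""
    t = np.arange(n_samples) / SAMPLE_RATE
    return amp * np.sin(2.0 * np.pi * freq * t + phase)

def query_asr_error(perturbed_audio):
    """Hypothetical black-box oracle: would return a transcription-error score
    (e.g., character error rate) for the perturbed clip. Dummy placeholder so
    the sketch runs end to end."""
    return float(np.tanh(np.abs(perturbed_audio).mean() * 50.0))

def reinforce_attack(clean_audio, iterations=200, lr=0.05, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    # Policy mean over (log-amplitude, log-frequency, phase); fixed-variance Gaussian.
    theta = np.array([np.log(0.01), np.log(100.0), 0.0])
    baseline = 0.0
    for _ in range(iterations):
        sample = theta + sigma * rng.standard_normal(3)
        amp, freq, phase = np.exp(sample[0]), np.exp(sample[1]), sample[2]
        delta = make_periodic_perturbation(amp, freq, phase, clean_audio.size)
        reward = query_asr_error(clean_audio + delta)
        baseline = 0.9 * baseline + 0.1 * reward   # moving-average baseline
        # REINFORCE update for a fixed-variance Gaussian policy.
        theta += lr * (reward - baseline) * (sample - theta) / sigma**2
    return theta

if __name__ == "__main__":
    audio = np.zeros(int(SAMPLE_RATE * CLIP_SECONDS), dtype=np.float32)
    print("learned perturbation parameters:", reinforce_attack(audio))
```

Because the perturbation is parameterized by a small number of values describing a repeating waveform rather than by every audio sample, a learned policy of this kind can in principle be applied to unseen clips without re-optimizing from scratch, which is the property the abstract refers to as real-time generalization.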
