Abstract

Deep neural networks (DNNs) have achieved strong classification performance in EEG-based brain-computer interface (BCI) systems. However, recent studies have found that carefully designed input samples, known as adversarial examples, can easily fool well-performing DNN models with minor perturbations undetectable by a human. This paper proposes an efficient generative model named the generative perturbation network (GPN), which can generate universal adversarial examples with a single architecture for both non-targeted and targeted attacks. Furthermore, the proposed model can be efficiently extended to conditionally or simultaneously generate perturbations for multiple targets and victim models. Our experimental evaluation demonstrates that perturbations generated by the proposed model outperform previous approaches for crafting signal-agnostic perturbations. We also show that the network extended to signal-specific attacks significantly reduces generation time while achieving comparable performance. The proposed method's transferability across classification networks is superior to that of other methods, which demonstrates the high generality of our perturbations. Our code is available at https://github.com/AIRLABkhu/Generative-Perturbation-Networks.
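To make the idea concrete, below is a minimal, hypothetical PyTorch sketch of one training step for a generator of this kind: a network maps a fixed latent code to a norm-bounded universal perturbation and is trained to fool a frozen victim classifier. The class name `GPN`, all tensor shapes, the `eps` budget, and the toy victim model are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: a generative perturbation network trained to craft a
# universal (signal-agnostic) perturbation against a frozen EEG classifier.
import torch
import torch.nn as nn

class GPN(nn.Module):
    """Maps a latent code to a bounded perturbation for (channels, samples) EEG trials."""
    def __init__(self, channels=22, samples=256, latent_dim=64, eps=0.05):
        super().__init__()
        self.eps = eps  # L-infinity budget on the perturbation (assumed value)
        self.channels, self.samples = channels, samples
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, channels * samples),
        )

    def forward(self, z):
        delta = self.net(z).view(-1, 1, self.channels, self.samples)
        return self.eps * torch.tanh(delta)  # keep |delta| <= eps elementwise

# Toy frozen victim classifier (stand-in for an EEG decoder such as EEGNet).
victim = nn.Sequential(nn.Flatten(), nn.Linear(22 * 256, 4))
for p in victim.parameters():
    p.requires_grad_(False)

gpn = GPN()
opt = torch.optim.Adam(gpn.parameters(), lr=1e-3)
z = torch.randn(1, 64)  # fixed latent code -> one universal perturbation

x = torch.randn(32, 1, 22, 256)  # a batch of EEG trials (random placeholders)
y = torch.randint(0, 4, (32,))   # their true labels

delta = gpn(z)                   # one perturbation broadcast over the batch
logits = victim(x + delta)
# Non-targeted attack: *maximize* the victim's loss on the true labels.
# For a targeted attack, minimize cross-entropy against the target label instead.
loss = -nn.functional.cross_entropy(logits, y)
opt.zero_grad()
loss.backward()
opt.step()
```

Conditioning the generator on a target class or victim-model identifier, as the abstract describes, would amount to concatenating an embedding of that condition to the latent code `z` in this sketch.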
