Abstract

Generative adversarial networks (GANs) have recently shown great success in applications such as image and time-series classification. However, these applications are vulnerable to complex attack scenarios, for example inference and data-poisoning attacks, which can alter or infer sensitive information about systems and users. Protecting electroencephalographic (EEG) brain signals against illegal disclosure is therefore of great interest. In this paper, we propose a privacy-preserving GAN method to generate and classify EEG data effectively. Generating EEG data offers a range of capabilities, including sharing experimental data without infringing user privacy, improving machine learning models for brain-computer interface tasks, and restoring corrupted data. The proposed GAN model is trained under a differential privacy model to raise the privacy level of the data, limiting what queries against the artificial trials can reveal about the real participants' EEG signals. The performance of the proposed method was evaluated on a motor imagery classification task, in which real EEG data are augmented with artificially generated samples for training machine learning classifiers. The evaluation was performed on a benchmark EEG data set with nine subjects. The experimental outcomes revealed that the non-private version of the proposed approach can produce high-quality data that significantly improve motor imagery classification performance. The private version showed lower but comparable performance to standard models trained on real data only.
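The abstract does not spell out how the differential privacy guarantee is enforced during GAN training; a common choice is DP-SGD-style noisy gradient updates. The sketch below is a hypothetical illustration of one such update step (per-sample gradient clipping followed by Gaussian noise), not the paper's confirmed mechanism; all names (`dp_sgd_step`, `clip_norm`, `noise_multiplier`) are illustrative.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One differentially private gradient step (DP-SGD-style sketch).

    Hypothetical illustration: clip each per-sample gradient to `clip_norm`,
    average the clipped gradients, then add Gaussian noise calibrated to the
    clipping bound. The paper's exact private training procedure may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_sample_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose L2 norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation is proportional to the per-sample sensitivity
    # (clip_norm) divided by the batch size.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(per_sample_grads),
                       size=avg.shape)
    return avg + noise
```

With `noise_multiplier=0` the step reduces to plain clipped averaging, which makes the clipping behavior easy to verify in isolation.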
