The performance of a classifier in a brain-computer interface (BCI) system depends strongly on the quality and quantity of the training data. Typically, training data are collected in a laboratory where users perform tasks in a controlled environment. In real-life BCI applications, however, users' attention may be diverted, which can degrade classifier performance. The classifier can be made more robust by acquiring additional data under such conditions, but recording electroencephalogram (EEG) data over several long calibration sessions is impractical. A potentially time- and cost-efficient alternative is artificial data generation. Hence, in this study, we proposed a framework based on deep convolutional generative adversarial networks (DCGANs) for generating artificial EEG data to augment the training set and thereby improve the performance of a BCI classifier. For a comparative investigation, we designed a motor-task experiment with diverted- and focused-attention conditions. We used an end-to-end deep convolutional neural network to classify movement intention versus rest on data from 14 subjects. Leave-one-subject-out (LOO) classification yielded baseline accuracies of 73.04% for diverted attention and 80.09% for focused attention without data augmentation. With augmentation using the proposed DCGAN-based framework, accuracy improved significantly by 7.32% for diverted attention ( ) and by 5.45% for focused attention ( ). In addition, we applied the method to dataset IVa from BCI Competition III to distinguish different motor imagery tasks, where it increased accuracy by 3.57% ( ). This study shows that using GANs for EEG augmentation can significantly improve BCI performance, especially in real-life applications where users' attention may be diverted.
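To make the augmentation idea concrete, the following is a minimal, illustrative sketch of a 1-D DCGAN for EEG-like segments in PyTorch. The channel count, segment length, latent dimension, layer widths, and training hyperparameters below are assumptions chosen for illustration, not the architecture or settings reported in this study.

```python
# Minimal 1-D DCGAN sketch for EEG-like signals (illustrative only; channel
# count, segment length, latent size, and layer widths are assumptions).
import torch
import torch.nn as nn

LATENT_DIM = 100   # assumed size of the noise vector
N_CHANNELS = 1     # assumed single EEG channel per generated segment
SEG_LEN = 512      # assumed segment length in samples

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # project noise to a coarse temporal feature map, then upsample
            nn.ConvTranspose1d(LATENT_DIM, 128, kernel_size=32, stride=1),  # -> (128, 32)
            nn.BatchNorm1d(128), nn.ReLU(True),
            nn.ConvTranspose1d(128, 64, kernel_size=4, stride=4),           # -> (64, 128)
            nn.BatchNorm1d(64), nn.ReLU(True),
            nn.ConvTranspose1d(64, N_CHANNELS, kernel_size=4, stride=4),    # -> (1, 512)
            nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z.view(-1, LATENT_DIM, 1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 64, kernel_size=4, stride=4),             # -> (64, 128)
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv1d(64, 128, kernel_size=4, stride=4),                    # -> (128, 32)
            nn.BatchNorm1d(128), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv1d(128, 1, kernel_size=32, stride=1),                    # -> (1, 1)
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).view(-1)

# One adversarial training step with the standard DCGAN (BCE) objective.
G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCELoss()

real = torch.randn(16, N_CHANNELS, SEG_LEN)  # placeholder for a batch of real EEG segments
fake = G(torch.randn(16, LATENT_DIM))

# discriminator update: real -> 1, generated -> 0
d_loss = bce(D(real), torch.ones(16)) + bce(D(fake.detach()), torch.zeros(16))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# generator update: push the discriminator toward labeling generated data as real
g_loss = bce(D(fake), torch.ones(16))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Once trained per class (and, if needed, per condition), such a generator can be sampled to produce artificial trials that are appended to the real training set before fitting the BCI classifier.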