Background. Virtual reality (VR) simulates real-life events and scenarios and is widely used in education, entertainment, and medicine. VR can be presented in two dimensions (2D) or three dimensions (3D), with 3D VR offering a more realistic and immersive experience. Previous research has shown that electroencephalogram (EEG) profiles induced by 3D VR differ from those induced by 2D VR in several respects, including brain rhythm power, activation, and functional connectivity. However, studies focused on classifying EEG recorded in 2D and 3D VR contexts remain limited.

Methods. A 56-channel EEG was recorded while visual stimuli were presented in 2D and 3D VR. The recorded EEG signals were classified using two approaches: traditional machine learning and deep learning. In the traditional approach, features such as power spectral density (PSD) and common spatial patterns (CSP) were extracted, and three classifiers were used: support vector machines (SVM), K-nearest neighbors (KNN), and random forests (RF). In the deep learning approach, a convolutional neural network designed for EEG, EEGNet, was employed. The classification performance of these methods was then compared.

Results. In terms of accuracy, precision, recall, and F1-score, the deep learning method outperformed the traditional machine learning approaches. Specifically, the classification accuracy of the EEGNet deep learning model reached up to 97.86%.

Conclusions. EEGNet-based deep learning significantly outperforms conventional machine learning methods in classifying EEG signals induced by 2D and 3D VR. Given that EEGNet was designed for EEG-based brain-computer interfaces (BCI), this superior classification performance suggests that it can enhance the application of 3D VR in BCI systems.
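To make the traditional pipeline described in the Methods concrete, the following is a minimal sketch of one plausible implementation: log band-power (PSD) features computed with Welch's method and evaluated with SVM, KNN, and RF classifiers via cross-validation. The sampling rate, epoch length, frequency bands, and classifier hyperparameters are illustrative assumptions, not the settings used in the study, and the data below are random placeholders.

```python
# Illustrative sketch (not the authors' exact pipeline): PSD features + SVM/KNN/RF.
import numpy as np
from scipy.signal import welch
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

def psd_features(epochs, fs=500, bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels * n_bands)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in bands:
        idx = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., idx].mean(axis=-1))  # mean band power per channel
    return np.log(np.stack(feats, axis=-1)).reshape(len(epochs), -1)

# Placeholder data: 200 trials, 56 channels, 2 s epochs at an assumed 500 Hz,
# with binary labels standing in for the 2D vs. 3D VR conditions.
rng = np.random.default_rng(0)
X_raw = rng.standard_normal((200, 56, 1000))
y = rng.integers(0, 2, 200)

X = psd_features(X_raw)
for name, clf in [("SVM", SVC(kernel="rbf", C=1.0)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("RF", RandomForestClassifier(n_estimators=200, random_state=0))]:
    scores = cross_val_score(make_pipeline(StandardScaler(), clf), X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```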
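For the deep learning branch, the sketch below shows a simplified PyTorch model in the style of EEGNet (temporal convolution, depthwise spatial convolution across the 56 channels, a separable convolution, and a linear classifier head). It is an assumption-laden approximation for illustration only; the published EEGNet architecture and the hyperparameters used in the study may differ.

```python
# Simplified EEGNet-style model (illustrative; not the exact published architecture).
import torch
import torch.nn as nn

class EEGNetSketch(nn.Module):
    def __init__(self, n_channels=56, n_samples=1000, n_classes=2,
                 F1=8, D=2, F2=16, dropout=0.5):
        super().__init__()
        self.block1 = nn.Sequential(
            nn.Conv2d(1, F1, (1, 64), padding=(0, 32), bias=False),          # temporal filters
            nn.BatchNorm2d(F1),
            nn.Conv2d(F1, F1 * D, (n_channels, 1), groups=F1, bias=False),   # depthwise spatial filters
            nn.BatchNorm2d(F1 * D),
            nn.ELU(),
            nn.AvgPool2d((1, 4)),
            nn.Dropout(dropout),
        )
        self.block2 = nn.Sequential(
            nn.Conv2d(F1 * D, F1 * D, (1, 16), padding=(0, 8),
                      groups=F1 * D, bias=False),                            # separable: depthwise
            nn.Conv2d(F1 * D, F2, (1, 1), bias=False),                       # separable: pointwise
            nn.BatchNorm2d(F2),
            nn.ELU(),
            nn.AvgPool2d((1, 8)),
            nn.Dropout(dropout),
        )
        with torch.no_grad():
            n_feats = self._features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.classifier = nn.Linear(n_feats, n_classes)

    def _features(self, x):
        return self.block2(self.block1(x)).flatten(1)

    def forward(self, x):  # x: (batch, 1, n_channels, n_samples)
        return self.classifier(self._features(x))

model = EEGNetSketch()
logits = model(torch.randn(4, 1, 56, 1000))  # a mini-batch of 4 EEG epochs
print(logits.shape)                          # torch.Size([4, 2]) -> 2D vs. 3D VR logits
```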