Because electroencephalography (EEG) signals are variable, high-dimensional, and non-stationary, EEG-based emotion recognition is mostly limited to individual subjects. To address these issues, we propose a multi-dimensional graph convolutional network (MD-GCN), which integrates the temporal and spatial characteristics of EEG signals and classifies emotions more accurately. First, exploiting the fact that the asymmetry of neuronal activity between the left and right hemispheres is important for emotion prediction, we initialize the adjacency matrix and perform preliminary edge prediction without considering node features. Then, we fuse features with an Inception network and feed the result into the graph convolutional network to learn the interrelationships between channels. Finally, we visually analyze the adjacency matrix. To evaluate the performance of the model, we conduct experiments on the SEED and SEED-IV datasets. The results show that the pre-defined adjacency matrix improves the accuracy of emotion recognition and that graph convolution outperforms comparable convolutional methods. They also suggest that emotional state is driven mainly by the interaction of important brain regions.
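The pipeline above can be illustrated with a minimal sketch of one graph-convolution step over a pre-defined adjacency matrix. This is not the paper's implementation: the 6-channel layout, the left/right channel pairing, and the layer sizes are illustrative assumptions (the SEED montage and MD-GCN architecture differ), and the propagation rule is the standard symmetric-normalized form ReLU(D^{-1/2}(A+I)D^{-1/2}HW).

```python
import numpy as np

# Hypothetical 6-channel EEG layout: channel i and channel i+3 are
# assumed to be left/right-hemisphere counterparts (an illustrative
# assumption, not the actual SEED electrode montage).
n_channels, n_feats, n_hidden = 6, 4, 8

# Pre-defined adjacency: connect each left-hemisphere channel to its
# right-hemisphere counterpart before any feature-based edge learning,
# mirroring the hemispheric-asymmetry prior described in the text.
A = np.zeros((n_channels, n_channels))
for i in range(3):
    A[i, i + 3] = A[i + 3, i] = 1.0

def gcn_layer(H, A, W):
    """One graph-convolution step: ReLU(D^-1/2 (A+I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)       # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

rng = np.random.default_rng(0)
H = rng.normal(size=(n_channels, n_feats))   # per-channel EEG features
W = rng.normal(size=(n_feats, n_hidden))     # learnable weight matrix
out = gcn_layer(H, A, W)
print(out.shape)  # (6, 8)
```

In a full model, the per-channel features `H` would come from the Inception-based feature fusion, and `A` would be refined during training rather than kept fixed.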