Electroencephalogram (EEG) signals, which objectively reflect brain state, are widely used in emotion recognition research. However, cross-session and cross-subject variability in EEG signals has hindered the practical deployment of EEG-based emotion recognition technologies. To address this issue, we propose a multi-source domain transfer method based on subdomain adaptation and minimum class confusion (MS-SAMCC). First, we apply the mix-up data augmentation technique to generate augmented samples. Next, we propose a minimum class confusion subdomain adaptation method (MCCSA) as a sub-module of the multi-source domain adaptation module. This sub-module globally aligns each source domain with the target domain while also aligning their individual subdomains, and employs minimum class confusion (MCC) as a regularizer. We conducted experiments on the SEED, SEED IV, and FACED datasets. In cross-subject experiments, our method achieved mean classification accuracies of 87.14% on SEED, 63.24% on SEED IV, and 42.07% on FACED. In cross-session experiments, it obtained average classification accuracies of 94.20% on SEED and 71.66% on SEED IV. These results demonstrate that the proposed MS-SAMCC can effectively address EEG-based emotion recognition tasks.
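The mix-up augmentation mentioned above can be sketched as follows. This is a minimal NumPy illustration of the standard mix-up formulation (a convex combination of two samples and their one-hot labels, with the coefficient drawn from a Beta distribution), not the authors' exact implementation; the function name, `alpha` default, and fixed seed are illustrative assumptions.

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Mix-up augmentation: return a convex combination of two samples
    (e.g. EEG feature vectors) and of their one-hot labels.

    The mixing coefficient lam is drawn from Beta(alpha, alpha).
    Note: a simplified sketch; the paper's actual augmentation
    pipeline may differ in details such as batch-level pairing.
    """
    rng = rng or np.random.default_rng(0)  # fixed seed for reproducibility
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2        # mixed input features
    y = lam * y1 + (1.0 - lam) * y2        # mixed (soft) label
    return x, y, lam
```

Because the labels are mixed with the same coefficient as the inputs, the resulting soft label still sums to one, so it can be used directly with a cross-entropy loss.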
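The minimum class confusion (MCC) regularizer used in MCCSA can be illustrated with a simplified sketch: build a class-confusion matrix from temperature-scaled predictions on unlabeled target samples and penalize its off-diagonal (between-class) mass. This NumPy version omits refinements of the original MCC formulation such as entropy-based sample weighting; the `temperature` default and function names are assumptions for illustration.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mcc_loss(logits, temperature=2.0):
    """Simplified minimum class confusion loss.

    logits: (batch, classes) classifier outputs on target-domain samples.
    Returns the mean off-diagonal mass of the row-normalized
    class-confusion matrix; smaller means less confusion between classes.
    """
    probs = softmax(logits / temperature)               # (batch, classes)
    confusion = probs.T @ probs                         # (classes, classes)
    confusion = confusion / confusion.sum(axis=1, keepdims=True)  # row-normalize
    n_classes = logits.shape[1]
    return (confusion.sum() - np.trace(confusion)) / n_classes
```

Confident, well-separated predictions concentrate the confusion matrix on its diagonal and drive the loss toward zero, which is why MCC works as a regularizer on the unlabeled target domain.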