The statistical variability of electroencephalographic (EEG) signals across individuals poses a persistent challenge for brain–computer interfaces (BCI). In particular, reusing pre-recorded data from previous subjects is a significant obstacle when decoding data from new subjects. To address these challenges, this paper introduces EEGTransferNet, an end-to-end modular transfer learning framework that increases the similarity of statistical distributions across subjects. It uses convolutional neural networks as backbones for feature extraction and incorporates statistical distribution alignment and domain adaptation at different network layers to learn general features and domain-specific features, respectively. Its generalizable design enables a single network to handle multiple BCI paradigms, making it both efficient and effective, and its modular structure allows the backbone and transfer modules to be customized for diverse tasks. The optimal structure of EEGTransferNet was determined through analysis and extensive experimentation. We validated EEGTransferNet on motor imagery (MI), error-related negativity (ERN), and rapid serial visual presentation (RSVP) datasets, where it achieves competitive performance compared with state-of-the-art methods. The experimental results confirm the effectiveness and superiority of EEGTransferNet.
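To make the described architecture concrete, the following is a minimal PyTorch sketch of the general idea: a CNN backbone extracts features from multi-channel EEG, and a distribution-alignment term (here a simple linear-kernel MMD between source and target feature means, chosen for illustration) is added to the supervised loss. The class names, layer sizes, and the specific alignment loss are assumptions for illustration, not the paper's exact EEGTransferNet configuration.

```python
import torch
import torch.nn as nn

def mmd_loss(source_feats, target_feats):
    """Illustrative alignment term: squared distance between feature means
    (linear-kernel MMD); the actual alignment module may differ."""
    return (source_feats.mean(dim=0) - target_feats.mean(dim=0)).pow(2).sum()

class ConvBackbone(nn.Module):
    """Compact CNN feature extractor for EEG trials shaped (batch, 1, channels, time)."""
    def __init__(self, n_channels=22, n_features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),  # temporal convolution
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # spatial convolution
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 8)),
            nn.Flatten(),
            nn.Linear(32 * 8, n_features),
        )

    def forward(self, x):
        return self.net(x)

class TransferEEGNet(nn.Module):
    """Modular layout: swappable backbone plus a task classifier head."""
    def __init__(self, n_channels=22, n_classes=4):
        super().__init__()
        self.backbone = ConvBackbone(n_channels)
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):
        feats = self.backbone(x)
        return feats, self.classifier(feats)

# Joint objective: supervised loss on labelled source data plus an alignment
# penalty between source and (unlabelled) target features.
model = TransferEEGNet()
xs, ys = torch.randn(8, 1, 22, 250), torch.randint(0, 4, (8,))  # source batch
xt = torch.randn(8, 1, 22, 250)                                 # target batch
fs, logits = model(xs)
ft, _ = model(xt)
loss = nn.functional.cross_entropy(logits, ys) + 0.5 * mmd_loss(fs, ft)
loss.backward()
```

In this sketch, the backbone and the alignment term are independent components, which reflects the modular customization the abstract describes: either piece could be replaced (a different CNN, a different alignment or domain-adaptation loss) without changing the rest of the pipeline.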