The integration of electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) can advance brain-computer interfaces (BCIs). However, existing research in this domain has struggled with efficient feature selection, underutilizing the temporal richness of EEG and the spatial specificity of fNIRS data. To address this challenge, this study proposed a deep learning architecture, the multimodal DenseNet fusion (MDNF) model, trained on two-dimensional (2D) EEG images and leveraging advanced feature extraction techniques. The model transformed EEG data into 2D images using a short-time Fourier transform, applied transfer learning to extract discriminative features, and then fused them with fNIRS-derived spectral entropy features. This approach aimed to bridge existing gaps in EEG-fNIRS-based BCI research by improving classification accuracy and versatility across cognitive and motor imagery tasks. Experimental results on two public datasets demonstrated the superiority of the model over existing state-of-the-art methods. The high accuracy and precise feature utilization of the MDNF model thus demonstrate its potential for clinical applications in neurodiagnostics and rehabilitation, paving the way for patient-specific therapeutic strategies.
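The abstract names two preprocessing steps: an STFT that turns EEG channels into 2D time-frequency images, and a spectral-entropy feature computed from fNIRS signals. The sketch below illustrates both steps on synthetic single-channel signals; it is not the authors' implementation, and the sampling rates, window lengths, and function names are illustrative assumptions.

```python
# Minimal sketch of the two preprocessing steps described in the abstract.
# All parameters (sampling rates, nperseg) are hypothetical placeholders.
import numpy as np
from scipy.signal import stft, welch

def eeg_to_stft_image(eeg, fs=200, nperseg=64):
    """Convert a 1-D EEG channel into a 2-D time-frequency image via STFT."""
    _, _, Z = stft(eeg, fs=fs, nperseg=nperseg)
    img = np.abs(Z)  # magnitude spectrogram
    # Normalize to [0, 1] so the image is suitable as CNN input.
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

def fnirs_spectral_entropy(fnirs, fs=10, nperseg=128):
    """Shannon spectral entropy of an fNIRS channel from its normalized power spectrum."""
    _, psd = welch(fnirs, fs=fs, nperseg=nperseg)
    p = psd / psd.sum()
    return -np.sum(p * np.log2(p + 1e-12))

# Toy usage with random signals standing in for real recordings.
eeg = np.random.randn(2000)   # 10 s of EEG at a hypothetical 200 Hz
fnirs = np.random.randn(300)  # 30 s of fNIRS at a hypothetical 10 Hz
image = eeg_to_stft_image(eeg)           # 2-D input for the DenseNet branch
entropy = fnirs_spectral_entropy(fnirs)  # scalar feature fused with CNN features
```

In the fusion scheme the abstract describes, the 2D image would feed a pretrained DenseNet branch (transfer learning) whose extracted features are concatenated with the fNIRS spectral-entropy features before classification; the exact fusion layer is detailed in the full paper.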