Abstract
Electroencephalography (EEG) is a unique and valuable ancillary examination that is essential for diagnosing and analyzing a range of neurological diseases. Deep learning methods have shown considerable promise for challenging EEG screening tasks. However, most existing approaches focus on improving representational capacity by increasing network depth and width, and therefore fail to exploit the rich hierarchical feature information; more importantly, they neglect the spatial relationships between features within the network. To address these issues, we propose a novel multi-level feature fusion capsule network (MFF-CapsNet) for accurate and efficient EEG pathology detection. First, we devise a lightweight feature fusion module as the basic feature extraction unit, which densely concatenates feature maps from different levels. A capsule network (CapsNet) structure then captures the salient spatial relationships among these features. In particular, compared with the original CapsNet, we design two dedicated layers that reduce the number of parameters, memory footprint, and computation, thereby lowering the risk of overfitting. Experimental results show that MFF-CapsNet effectively classifies EEG recordings as pathological or healthy, outperforms current state-of-the-art methods, and is the first to meet the requirements of clinical application.
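To make the two ideas in the abstract concrete, the sketch below illustrates (i) a dense multi-level feature fusion unit that concatenates feature maps from successive levels, and (ii) a primary-capsule layer with the squash nonlinearity that turns the fused maps into capsule vectors. This is a minimal illustrative sketch, not the authors' MFF-CapsNet: all layer sizes, capsule dimensions, class names, and the single-channel time-frequency input are assumptions made for demonstration.

```python
# Illustrative sketch only (assumed layer sizes and capsule dimensions);
# not the authors' released implementation of MFF-CapsNet.
import torch
import torch.nn as nn


class FeatureFusionBlock(nn.Module):
    """Lightweight fusion unit: each conv layer receives the concatenation
    of all earlier feature maps in the block (dense connectivity)."""

    def __init__(self, in_channels: int, growth: int = 16, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth, kernel_size=3, padding=1, bias=False),
                nn.BatchNorm2d(growth),
                nn.ReLU(inplace=True),
            ))
            channels += growth  # the next layer sees all previous maps
        self.out_channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)


class PrimaryCapsules(nn.Module):
    """Converts fused feature maps into capsule vectors whose orientation
    can encode spatial relationships (squash nonlinearity)."""

    def __init__(self, in_channels: int, caps_dim: int = 8, num_maps: int = 4):
        super().__init__()
        self.caps_dim = caps_dim
        self.conv = nn.Conv2d(in_channels, num_maps * caps_dim,
                              kernel_size=3, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        u = self.conv(x)
        u = u.view(u.size(0), -1, self.caps_dim)           # (batch, n_caps, caps_dim)
        norm = (u ** 2).sum(dim=-1, keepdim=True)
        return (norm / (1.0 + norm)) * u / torch.sqrt(norm + 1e-8)  # squash


if __name__ == "__main__":
    # Assumed input: a single-channel EEG time-frequency map per sample.
    x = torch.randn(2, 1, 64, 64)
    fusion = FeatureFusionBlock(in_channels=1)
    fused = fusion(x)
    caps = PrimaryCapsules(fusion.out_channels)(fused)
    print(fused.shape, caps.shape)
```

Dense concatenation keeps early low-level maps available alongside deeper ones, which is one common way to reuse hierarchical features without widening or deepening the network; the capsule output then preserves vector-valued (spatial) feature relationships rather than collapsing them with pooling.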