Abstract
There is considerable interest in clinical practice in automatic tools for accurate analysis of sleep quality. Sleep staging, an important component of sleep analysis, has been studied for years, and many promising advances have been reported in the literature. In this paper, a dual-modal, multi-scale deep neural network using Electroencephalogram (EEG) and Electrocardiogram (ECG) signals is proposed for end-to-end sleep staging. The proposed network adopts a dual-modal (one branch for EEG and one for ECG) and multi-scale structure, with convolutional modules as its basic building blocks. The dual-modal structure is designed to combine the merits of the two signals for more robust sleep staging. The multi-scale structure exploits features at different scales of the EEG signal, which have been found to be very important in characterizing sleep states. The features extracted from the EEG and ECG signals are fused through several fully connected layers and then fed into a classifier for sleep staging. The performance of the proposed network was evaluated on the public MIT-BIH Polysomnographic dataset. The experimental results showed average accuracies of 97.97%, 98.84%, 88.80%, and 80.40% for distinguishing 'deep sleep vs. light sleep', 'rapid eye movement (REM) vs. non-rapid eye movement (NREM)', 'sleep vs. wake', and 'wake vs. deep sleep vs. light sleep vs. REM', respectively.
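The pipeline described above (two modality-specific multi-scale convolutional branches, feature fusion through fully connected layers, then a four-class classifier) can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the epoch lengths, kernel sizes, and random weights are hypothetical stand-ins for learned parameters, and NumPy convolutions stand in for trained convolutional layers.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_feature(x, kernel):
    # 'valid' 1-D convolution, ReLU, then global average pooling -> one scalar
    y = np.convolve(x, kernel, mode="valid")
    return np.maximum(y, 0.0).mean()

def multi_scale_features(signal, kernel_sizes=(3, 5, 9)):
    # one (randomly initialized) kernel per scale; a trained model learns these
    return np.array([conv_feature(signal, rng.standard_normal(k))
                     for k in kernel_sizes])

# toy 30-second epochs (hypothetical length / sampling rate)
eeg = rng.standard_normal(3000)  # EEG branch input
ecg = rng.standard_normal(3000)  # ECG branch input

# dual-modal, multi-scale feature extraction
f_eeg = multi_scale_features(eeg)
f_ecg = multi_scale_features(ecg)

# fusion via a fully connected layer, then a 4-class softmax classifier
fused = np.concatenate([f_eeg, f_ecg])       # shape (6,)
W = rng.standard_normal((4, fused.size))     # illustrative random weights
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()                         # wake / light / deep / REM
stage = int(np.argmax(probs))
```

The key design choice the abstract highlights is late fusion: each modality is summarized independently at several temporal scales before the concatenated features reach the shared classifier.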