Abstract
The most important part of sleep quality assessment is the classification of sleep stages, which helps to diagnose sleep-related diseases. In the traditional sleep staging method, subjects have to spend a night in a sleep clinic to record a polysomnogram. A sleep expert then classifies the sleep stages by inspecting the signals, which is a time-consuming and tedious task that is prone to human error. Recent studies propose fully automated techniques for classifying sleep stages, which make sleep scoring possible at home. Although comprehensive studies have been presented in this field, the classification results have not yet reached the gold standard, largely because they rely on a limited source of information such as a single-channel EEG. Therefore, this article introduces a new method for fusing two sources of information, the electroencephalogram (EEG) and the electrooculogram (EOG), to achieve promising results in the classification of sleep stages. In the proposed method, the features extracted from the EEG and EOG signals are divided into two feature sets: the EEG features and the fused features of EEG and EOG. Each feature set is then transformed into a horizontal visibility graph (HVG). Images of the HVGs are produced in a novel framework and classified by the proposed transfer learning convolutional neural network for data fusion (TLCNN-DF). Employing transfer learning at the training stage accelerates training of the CNN and improves the performance of the model. The proposed algorithm is used to classify the Sleep-EDF and Sleep-EDFx benchmark datasets. It classifies the Sleep-EDF dataset with an accuracy of 93.58% and a Cohen's kappa coefficient of 0.899. The results show that the proposed method achieves superior performance compared to state-of-the-art studies on sleep stage classification and can serve as a reliable alternative to conventional sleep staging.
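To illustrate the HVG step mentioned above, the sketch below builds a horizontal visibility graph from a short sequence: two samples are connected when every sample between them lies below both. This is a minimal, standalone illustration of the general HVG construction; the function name and the toy feature values are assumptions for the example, not the paper's implementation.

```python
import numpy as np

def horizontal_visibility_graph(x):
    """Build the horizontal visibility graph (HVG) of a 1-D sequence.

    Nodes are sample indices; nodes i < j are linked when every
    intermediate sample is strictly below both x[i] and x[j].
    Returns the adjacency matrix as a uint8 array.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    adj = np.zeros((n, n), dtype=np.uint8)
    for i in range(n - 1):
        # Consecutive samples always see each other.
        adj[i, i + 1] = adj[i + 1, i] = 1
        running_max = x[i + 1]
        for j in range(i + 2, n):
            # Horizontal visibility: every sample between i and j
            # must be lower than both endpoints.
            if running_max < x[i] and running_max < x[j]:
                adj[i, j] = adj[j, i] = 1
            running_max = max(running_max, x[j])
    return adj

# Example: adjacency matrix for a short, made-up feature sequence
features = [0.3, 1.2, 0.5, 0.9, 0.1, 1.5]
print(horizontal_visibility_graph(features))
```

The resulting adjacency matrix can then be rendered as an image and fed to an image classifier, which is the role the HVG images play in the pipeline described in the abstract.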
Highlights
We spend one-third of our lives asleep
By using transfer learning to train the model, TLCNN-DF correctly recognizes 60% of N1-stage epochs, and misclassification into the Wake, N2 and Rapid Eye Movement (REM) stages is reduced to 10.4%, 14.8% and 13.9%, respectively
In this paper, a novel algorithm is presented for sleep stage classification using EEG and EOG
Summary
We spend one-third of our lives asleep. Any abnormality in the sleep cycle may cause serious problems such as extreme fatigue and lack of concentration, or metabolic problems such as diabetes and obesity [1], [2]. To score sleep, a physician manually assigns a specific sleep stage to every 30-second epoch of the recorded signals. Sleep stages include Wake (W), Rapid Eye Movement (REM) and Non-Rapid Eye Movement (NREM) stages.
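For illustration, the following is a minimal sketch of the 30-second epoching described above. The function name, the 100 Hz sampling rate, and the random signal are assumptions for the example, not details taken from the paper.

```python
import numpy as np

def segment_into_epochs(signal, fs, epoch_sec=30):
    """Split a 1-D signal into non-overlapping fixed-length epochs.

    Trailing samples that do not fill a whole epoch are discarded,
    mirroring the 30-second scoring windows used in manual staging.
    """
    samples_per_epoch = int(fs * epoch_sec)
    n_epochs = len(signal) // samples_per_epoch
    return np.reshape(
        signal[: n_epochs * samples_per_epoch],
        (n_epochs, samples_per_epoch),
    )

# Example: 8 hours of a 100 Hz channel -> 960 epochs of 3000 samples each
eeg = np.random.randn(8 * 3600 * 100)
epochs = segment_into_epochs(eeg, fs=100)
print(epochs.shape)  # (960, 3000)
```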