Abstract

Electroencephalogram (EEG) signals are often used to assess sleep quality and to treat sleep disorders. Many existing methods achieve high accuracy only through extensive preprocessing and hand-crafted feature extraction of EEG signals, which requires substantial prior knowledge. In this paper, we propose a novel sleep stage classification framework, named FRL&S2SL, which combines fast representation learning (FRL) and semantic-to-signal learning (S2SL) and operates on single-channel EEG without any preprocessing. In the proposed framework, a convolutional neural network (CNN) extracts time-invariant features and a bidirectional long short-term memory (BiLSTM) network extracts temporal features. Furthermore, an auxiliary classifier generative adversarial network (ACGAN) is used, for the first time, to embed semantic features into signal features and to extract domain-knowledge features from EEG signals. According to the American Academy of Sleep Medicine (AASM), sleep is divided into five stages: wake, rapid eye movement (REM), and three non-rapid eye movement stages (N1/N2/N3). We evaluate the framework on single-channel EEG (Fpz-Oz) from the Sleep-EDF dataset, scored according to the AASM standard. The results show that our framework achieves state-of-the-art performance on many evaluation metrics.
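
The abstract only names the building blocks, so the following is a minimal, hypothetical sketch (in PyTorch) of the FRL feature extractor as described above: a 1-D CNN for time-invariant features followed by a BiLSTM for temporal features, applied to raw single-channel 30-second EEG epochs. The layer sizes, sampling rate (assumed 100 Hz, i.e. 3000 samples per epoch), and classification head are illustrative assumptions, not the paper's exact configuration, and the ACGAN-based semantic embedding is omitted.

import torch
import torch.nn as nn

class CnnBiLstmStager(nn.Module):
    """Hypothetical CNN + BiLSTM sleep-stage model, not the paper's exact architecture."""
    def __init__(self, n_classes: int = 5, hidden: int = 128):
        super().__init__()
        # CNN branch: time-invariant features extracted from the raw EEG epoch
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=50, stride=6), nn.BatchNorm1d(64), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(64, 128, kernel_size=8), nn.BatchNorm1d(128), nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # BiLSTM branch: temporal dependencies across the CNN feature sequence
        self.bilstm = nn.LSTM(input_size=128, hidden_size=hidden,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 3000) raw single-channel EEG, no preprocessing
        feats = self.cnn(x)                 # (batch, 128, T)
        feats = feats.permute(0, 2, 1)      # (batch, T, 128) for the LSTM
        out, _ = self.bilstm(feats)         # (batch, T, 2*hidden)
        return self.classifier(out[:, -1])  # last time step -> 5 stage logits

# Example: a batch of four unprocessed 30-s epochs sampled at 100 Hz
logits = CnnBiLstmStager()(torch.randn(4, 1, 3000))  # shape: (4, 5)

The ACGAN component described in the abstract would operate on top of these learned features, conditioning on semantic (stage) labels; it is left out of this sketch because the abstract does not specify its configuration.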
