Abstract

Sleep staging, the classification of sleep epochs into distinct sleep stages, is essential for sleep assessment and plays an important role in disease diagnosis. Polysomnography (PSG), which comprises multiple physiological signals, e.g., electroencephalogram (EEG) and electrooculogram (EOG), is the gold standard for sleep staging. Although existing studies have achieved high performance on automatic sleep staging from PSG, two limitations remain: 1) they focus on local features but ignore global features within each sleep epoch, and 2) they ignore cross-modality context relationships between EEG and EOG. In this paper, we propose CareSleepNet, a novel hybrid deep learning network for automatic sleep staging from PSG recordings. Specifically, we first design a multi-scale Convolutional-Transformer Epoch Encoder to encode both local salient wave features and global features within each sleep epoch. Then, we devise a Cross-Modality Context Encoder based on a co-attention mechanism to model cross-modality context relationships between the different modalities. Next, we use a Transformer-based Sequence Encoder to capture the sequential relationships among sleep epochs. Finally, the learned feature representations are fed into an epoch-level classifier to determine the sleep stages. We collected a private sleep dataset, SSND, and used two public datasets, Sleep-EDF-153 and ISRUC, to evaluate the performance of CareSleepNet. The experimental results show that CareSleepNet achieves state-of-the-art performance on all three datasets. Moreover, we conduct ablation studies and attention visualizations to demonstrate the effectiveness of each module and to analyze the influence of each modality.
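The abstract describes a four-stage pipeline: per-epoch multi-scale convolution plus Transformer encoding, co-attention across EEG and EOG, a sequence-level Transformer over consecutive epochs, and a per-epoch classifier. Since no implementation is given here, the following is a minimal PyTorch sketch of how such a pipeline could be wired together; every name (EpochEncoder, CoAttention, CareSleepNetSketch), kernel scale, layer size, and hyperparameter is an illustrative assumption, not the authors' published architecture.

```python
# Minimal sketch of the pipeline described in the abstract.
# All names, sizes, and hyperparameters are assumptions for illustration.
import torch
import torch.nn as nn


class EpochEncoder(nn.Module):
    """Multi-scale CNN + Transformer: small/large kernels capture local
    waves at different scales; the Transformer adds global context
    within the epoch."""

    def __init__(self, d_model=128):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv1d(1, d_model // 2, kernel_size=k, padding=k // 2),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(64),  # align both scales to one length
            )
            for k in (8, 64)  # assumed fine- and coarse-scale kernels
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                        # x: (batch, 1, samples)
        z = torch.cat([b(x) for b in self.branches], dim=1)
        z = self.transformer(z.transpose(1, 2))  # (batch, 64, d_model)
        return z.mean(dim=1)                     # one vector per epoch


class CoAttention(nn.Module):
    """Cross-modality context: each modality queries the other."""

    def __init__(self, d_model=128):
        super().__init__()
        self.eeg_to_eog = nn.MultiheadAttention(d_model, 4, batch_first=True)
        self.eog_to_eeg = nn.MultiheadAttention(d_model, 4, batch_first=True)

    def forward(self, eeg, eog):                 # (batch, n_epochs, d_model)
        eeg_ctx, _ = self.eeg_to_eog(eeg, eog, eog)  # EEG queries EOG
        eog_ctx, _ = self.eog_to_eeg(eog, eeg, eeg)  # EOG queries EEG
        return torch.cat([eeg_ctx, eog_ctx], dim=-1)


class CareSleepNetSketch(nn.Module):
    """Epoch encoders -> co-attention -> sequence Transformer -> classifier."""

    def __init__(self, d_model=128, n_stages=5):
        super().__init__()
        self.eeg_enc = EpochEncoder(d_model)
        self.eog_enc = EpochEncoder(d_model)
        self.co_attn = CoAttention(d_model)
        layer = nn.TransformerEncoderLayer(2 * d_model, nhead=4, batch_first=True)
        self.seq_enc = nn.TransformerEncoder(layer, num_layers=2)  # inter-epoch
        self.classifier = nn.Linear(2 * d_model, n_stages)  # per-epoch logits

    def forward(self, eeg, eog):                 # (batch, n_epochs, samples)
        b, n, s = eeg.shape
        e = self.eeg_enc(eeg.reshape(b * n, 1, s)).reshape(b, n, -1)
        o = self.eog_enc(eog.reshape(b * n, 1, s)).reshape(b, n, -1)
        return self.classifier(self.seq_enc(self.co_attn(e, o)))


# Example: 2 recordings x 20 epochs of 30 s at an assumed 100 Hz
# -> (2, 20, 5) stage logits, one prediction per epoch.
logits = CareSleepNetSketch()(torch.randn(2, 20, 3000), torch.randn(2, 20, 3000))
```

The sketch only fixes the data flow implied by the abstract; the paper's actual fusion of the two attention streams, sequence length, and training details may differ.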
