Abstract

The temporal context within sleep stage sequences encodes sleep transition rules, which are important for improving sleep staging performance. Existing multi-task learning methods reconstruct the EEG signal of a single sleep stage, ignoring the importance of sequential temporal context for capturing long-term dependencies and enhancing representation learning. To address these issues, we propose a multi-task deep learning model that jointly reconstructs the sequence signal and segments the time series. The model strengthens the ability of the time series segmentation task to capture sequential temporal context and improves single-channel EEG performance by optimizing an encoder shared with the sequence signal reconstruction task. In addition, we design a one-dimensional channel attention module to enhance the feature representations extracted from the sleep sequence signal. Experimental results on four datasets show that the multi-task deep learning model improves generalization through sequence signal reconstruction. Compared with other state-of-the-art methods, the proposed method achieves competitive performance in terms of metrics such as accuracy: 85.6% on the 2013 version of the Sleep-EDF Database Expanded, 83.4% on the 2018 version of the Sleep-EDF Database Expanded, 85.6% on the Sleep Heart Health Study, and 77.4% on the CAP Sleep Database.
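The abstract does not specify the internals of the one-dimensional channel attention module. A common design for such modules is squeeze-and-excitation-style gating: pool each channel over time, pass the pooled vector through a small bottleneck, and reweight the channels with a sigmoid gate. The sketch below illustrates that general pattern in NumPy; the function name, the reduction ratio `r`, and the random weights are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def channel_attention_1d(x, w1, w2):
    """Illustrative SE-style 1D channel attention (an assumed design,
    not the paper's exact module).

    x  : array of shape (channels, time), e.g. EEG feature maps
    w1 : bottleneck weights, shape (channels // r, channels)
    w2 : expansion weights, shape (channels, channels // r)
    """
    s = x.mean(axis=1)                     # squeeze: global average pool over time
    h = np.maximum(0.0, w1 @ s)            # excitation: ReLU bottleneck
    a = 1.0 / (1.0 + np.exp(-(w2 @ h)))    # sigmoid gate, one weight per channel
    return x * a[:, None]                  # rescale each channel by its gate

# Toy usage with random weights (reduction ratio r = 2, assumed)
rng = np.random.default_rng(0)
C, T, r = 8, 100, 2
x = rng.standard_normal((C, T))
w1 = rng.standard_normal((C // r, C))
w2 = rng.standard_normal((C, C // r))
y = channel_attention_1d(x, w1, w2)
assert y.shape == x.shape  # attention preserves the feature-map shape
```

Because the gate is a per-channel scalar in (0, 1), the module can only attenuate or preserve channels, which makes it a cheap way to emphasize informative channels in the extracted sequence features.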
