Abstract

Sleep stage classification is central to sleep analysis, providing information for the diagnosis and monitoring of sleep-related conditions. To analyze sleep structure accurately under comfortable recording conditions, many studies have applied deep learning to sleep staging from single-lead electrocardiograms (ECGs). However, considerable room for improvement remains in inter-subject classification. In this paper, we propose an end-to-end, multi-scale, subject-adaptive network that improves performance along three axes: model architecture, training method, and loss computation. A multi-scale residual feature encoder extracts details at multiple temporal resolutions to support feature extraction from single-lead ECGs under varying conditions. To account for the domain shift caused by individual differences and acquisition conditions, we introduce a domain-aligning layer that encourages domain-invariant features. To further enhance performance, a multi-class focal loss reduces the negative impact of class imbalance on learning, and an auxiliary sequence-prediction loss is added to the classification task to help the model judge sleep stages. Evaluated on the public test datasets SHHS2, SHHS1, and MESA, the model achieved mean accuracies (Cohen's kappa) of 0.849 (0.837), 0.827 (0.790), and 0.868 (0.840) for wake/light sleep/deep sleep/REM classification, confirming an improvement over the baseline. The model also performed well in cross-dataset testing. These results contribute to improving the reliability of ECG-based sleep staging.
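The multi-class focal loss mentioned above down-weights well-classified examples so that minority sleep stages contribute more to training. A minimal NumPy sketch is given below; the focusing parameter `gamma` and the optional per-class weights `alpha` are illustrative placeholders, since the abstract does not state the values used in the paper.

```python
import numpy as np

def multi_class_focal_loss(logits, targets, gamma=2.0, alpha=None):
    """Focal loss for logits of shape (N, C) and integer targets of shape (N,).

    With gamma = 0 this reduces to ordinary cross-entropy; larger gamma
    down-weights easy (high-confidence) examples. gamma/alpha here are
    illustrative, not the paper's settings.
    """
    logits = logits - logits.max(axis=1, keepdims=True)        # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_pt = log_probs[np.arange(len(targets)), targets]       # log p of true class
    pt = np.exp(log_pt)
    loss = -((1.0 - pt) ** gamma) * log_pt                     # focal modulation
    if alpha is not None:
        loss = np.asarray(alpha)[targets] * loss               # optional class weights
    return loss.mean()
```

For example, with `gamma=0` the function returns the plain mean cross-entropy, while `gamma=2` shrinks the contribution of confidently classified epochs relative to hard ones.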
