Abstract

Sleep is indispensable for human survival, and automatic sleep staging is crucial for evaluating sleep quality. Deep learning has become popular in recent years as a way to increase the accuracy of sleep staging. However, most methods have large parameter counts and heavy computational burdens, making them unsuitable for mobile deployment. To this end, we propose a novel lightweight model named SleepNet-Lite for sleep staging. First, a multiscale convolutional block (MSCB) is designed to extract multiscale electroencephalogram features. Second, an inverted residual block is used to fuse the multiscale features. Channel shuffle and channel split operations are inserted into the MSCB and the residual block, enabling interactive information flow that improves performance. In addition, we replace standard convolution with depthwise separable convolution to increase representational efficiency and reduce the number of parameters. A global average pooling layer is then applied before the fully connected layer to further reduce the parameter count and avoid overfitting. To address class imbalance, we use a logit-adjustment loss function that adaptively attends to each stage during training without increasing the computational burden. We show that SleepNet-Lite surpasses state-of-the-art approaches in overall accuracy and Cohen's kappa on two public datasets, achieving superior performance with only 41.67 K parameters on the Sleep-EDF dataset and 42.44 K on the MASS-SS3 dataset, and providing a promising deployment solution for mobile platforms and wearables.
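As a rough illustration of two techniques the abstract names, the sketch below compares the parameter count of a standard convolution against its depthwise separable counterpart, and implements a generic logit-adjusted cross-entropy (adding tau · log of the class prior to each logit before the softmax). All channel counts, kernel sizes, priors, and the exact loss formulation are illustrative assumptions, not values or code from the paper.

```python
import numpy as np

# Parameter count of a 1-D conv layer: standard vs. depthwise separable.
# Channel counts and kernel size are hypothetical, not from the paper.
c_in, c_out, k = 64, 128, 7
standard_params = c_in * c_out * k          # one dense kernel per output channel
separable_params = c_in * k + c_in * c_out  # depthwise kernel + 1x1 pointwise

def logit_adjusted_loss(logits, labels, class_priors, tau=1.0):
    """Softmax cross-entropy on logits shifted by tau * log(class prior).

    Rare classes get a larger effective margin, addressing class imbalance
    with no extra parameters or compute at inference time.
    """
    adjusted = logits + tau * np.log(class_priors)   # broadcast over the batch
    z = adjusted - adjusted.max(axis=1, keepdims=True)  # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Toy batch: 3 epochs of EEG scored into 5 sleep stages, imbalanced priors.
priors = np.array([0.45, 0.05, 0.30, 0.10, 0.10])
logits = np.random.default_rng(0).normal(size=(3, 5))
labels = np.array([1, 3, 0])
loss = logit_adjusted_loss(logits, labels, priors)
```

With these illustrative sizes the separable layer needs 8,640 parameters versus 57,344 for the standard one — the kind of reduction that lets the full model stay in the tens of kilo-parameters.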
