Abstract

The aim of this study was to develop a sleep staging classification model that performs accurately across different wearable devices. Twenty-three healthy subjects underwent a full-night type I polysomnography and used one of two device combinations: (A) a flexible single-channel electroencephalogram (EEG) headband plus actigraphy (N=12) or (B) a rigid single-channel EEG headband plus actigraphy (N=11). The signals were segmented into 30-second epochs aligned with the polysomnographic stages scored by a board-certified sleep technologist (the model's ground truth), and 18 frequency- and time-domain features were extracted. The model was an ensemble of bagged decision trees; bagging (bootstrap aggregation) reduces overfitting and improves generalization. The model was evaluated with 5-fold cross-validation on the training set of an 80-20% dataset split. Each headband was also evaluated without the actigraphy feature. Subjects additionally completed a usability evaluation covering comfort, pain while sleeping, and sleep disturbance. Combination A achieved an F1-score of 98.4% and the flexible headband alone 97.7% (error rate for N1: combination A=9.8%; flexible headband alone=15.7%). Combination B achieved an F1-score of 96.9% and the rigid headband alone 95.3% (error rate for N1: combination B=17.0%; rigid headband alone=27.7%); in both cases, N1 was most often confounded with N2. We developed an accurate sleep classification model based on a single-channel EEG device, in which actigraphy was not an important feature. Both headbands were found usable, although the rigid one was more disruptive to sleep. Future research could extend these results by applying the model to a population with sleep disorders. Actigraphy, Wearable EEG Band and Smartphone for Sleep Staging (ID NCT04943562).
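The evaluation pipeline described above (30-second epochs, 18 features, bagged decision trees, 5-fold cross-validation on an 80-20% split) can be sketched as follows. This is an illustrative assumption, not the authors' implementation: the paper does not name a toolkit, and the data here are synthetic placeholders; scikit-learn's `BaggingClassifier` is used because its default base learner is a decision tree.

```python
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score, train_test_split

rng = np.random.default_rng(0)

# Synthetic placeholder data: one row per 30-second epoch,
# 18 frequency- and time-domain features per epoch.
n_epochs, n_features = 1000, 18
X = rng.normal(size=(n_epochs, n_features))
# Placeholder stage labels, e.g. 0=W, 1=N1, 2=N2, 3=N3, 4=REM.
y = rng.integers(0, 5, size=n_epochs)

# 80-20% dataset split, stratified by sleep stage.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Ensemble of bagged decision trees: BaggingClassifier's default
# base estimator is a decision tree, trained on bootstrap samples.
model = BaggingClassifier(n_estimators=30, random_state=0)

# 5-fold cross-validation on the training portion.
cv_scores = cross_val_score(model, X_train, y_train, cv=5)

# Final fit and held-out evaluation on the 20% test portion.
model.fit(X_train, y_train)
test_acc = model.score(X_test, y_test)
```

With real data, per-stage error rates (such as the N1 confusion reported above) would come from a confusion matrix rather than overall accuracy.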
