Abstract

Introduction
Wearable, multisensory consumer devices that estimate sleep are prevalent and hold great potential. Most validated actigraphic prediction studies of sleep stages (SS) have used only low-resolution (30 s) data and the Cole-Kripke algorithm; other algorithms are often proprietary and neither accessible nor validated. We present an automatic, data-driven deep learning algorithm that processes raw actigraphy (ACC) and photoplethysmography (PPG) from a low-cost consumer device at high (25 Hz) and low resolution to predict SS and to detect sleep-disordered breathing (SDB) events.

Methods
Our automatic, data-driven algorithm is a deep neural network trained and evaluated to predict SS and SDB events on 236 recordings of ACC data from a wrist-worn accelerometer and PPG data from the overlapping polysomnography (PSG). The network was tested on raw ACC and PPG data collected at 25 Hz with the HUAMI Arc2 wristband from 39 participants who underwent a nocturnal PSG.

Results
On the test dataset, per-subject overall accuracy (Acc), recall (Re), specificity (Sp), and kappa (κ) for the prediction of wake, NREM, and REM were Acc = 76.6%, Re = 72.4%, Sp = 78.0%, and κ = 0.42. On average, performance was 7% higher when using the raw sensor data as input instead of processed, low-resolution inputs. PPG was especially useful for REM detection. The network assigned 55.6% of patients to the correct SDB severity group when severity was defined by an apnea-hypopnea index above 15.

Conclusion
Current results show that SS prediction improves significantly when using the raw sensor data, indicating that the system holds promise as a pervasive monitoring device for patients with chronic sleep disorders. In contrast, the system did not show potential as a sleep apnea screening tool. Additional studies are ongoing to examine the effects of pathologies such as sleep apnea and periodic leg movement on SS prediction.

Support
Technical University of Denmark; University of Copenhagen, Copenhagen Center for Health Technology, Klarman Family Foundation.
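The abstract does not describe the network architecture, so the following is only a minimal sketch of how a model consuming raw 25 Hz sensor data could be set up: a small 1D convolutional network (PyTorch) that takes one 30 s epoch of tri-axial ACC plus PPG (four channels, 750 samples) and outputs wake/NREM/REM logits. The class name, layer sizes, and channel layout are all hypothetical, not taken from the paper.

```python
import torch
import torch.nn as nn

class RawSleepStager(nn.Module):
    """Hypothetical sketch: 1D CNN over one 30 s epoch of raw 25 Hz ACC (x, y, z) + PPG."""

    def __init__(self, in_channels: int = 4, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=25, stride=5),  # ~1 s receptive field
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),                               # pool over the epoch
        )
        self.classifier = nn.Linear(64, n_classes)                 # wake / NREM / REM

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 4, 750) -- 30 s * 25 Hz samples per epoch
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

# Example: a batch of 8 raw-sensor epochs
logits = RawSleepStager()(torch.randn(8, 4, 750))
print(logits.shape)  # torch.Size([8, 3])
```

A real implementation would likely also model temporal context across neighbouring epochs (e.g. with recurrent or attention layers); the pooling-plus-linear head above is purely illustrative.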
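The reported per-subject metrics (accuracy, recall, specificity, and Cohen's κ over wake/NREM/REM) could be computed along the lines below; macro-averaging recall and specificity across the three stages is an assumption on our part, not something stated in the abstract.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

STAGES = ("wake", "NREM", "REM")

def per_subject_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """Accuracy, macro recall, macro specificity, and kappa for one subject's epochs."""
    cm = confusion_matrix(y_true, y_pred, labels=list(range(len(STAGES))))
    acc = np.trace(cm) / cm.sum()
    recalls, specificities = [], []
    for k in range(len(STAGES)):
        tp = cm[k, k]
        fn = cm[k].sum() - tp
        fp = cm[:, k].sum() - tp
        tn = cm.sum() - tp - fn - fp
        recalls.append(tp / (tp + fn) if tp + fn else np.nan)
        specificities.append(tn / (tn + fp) if tn + fp else np.nan)
    return {
        "accuracy": acc,
        "recall": np.nanmean(recalls),
        "specificity": np.nanmean(specificities),
        "kappa": cohen_kappa_score(y_true, y_pred),
    }

# Example: PSG-scored vs. predicted stages for one subject (0=wake, 1=NREM, 2=REM)
y_true = np.array([0, 1, 1, 1, 2, 2, 1, 0])
y_pred = np.array([0, 1, 1, 2, 2, 2, 1, 1])
print(per_subject_metrics(y_true, y_pred))
```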
