Abstract

Supervised automatic sleep scoring algorithms are usually trained on sleep stage labels manually annotated on 30 s epochs of PSG data. In this study, we investigate the impact of using shorter epochs with various PSG input signals for training and testing a Long Short-Term Memory (LSTM) neural network. An LSTM model is evaluated on the 30 s epoch sleep stage labels provided with a publicly available dataset, as well as on 10 s subdivisions of those epochs. Additionally, three independent scorers re-labeled a subset of the dataset on shorter time windows, and the automatic sleep scoring experiments were repeated on this re-annotated subset. The highest performance is achieved on features extracted from 30 s epochs of a single-channel frontal EEG, with an accuracy, precision, and recall of 92.22%, 67.58%, and 66.00%, respectively. When a shorter epoch is used as input, performance decreases by approximately 20%. Re-annotating a subset of the dataset on shorter epochs did not improve the results and further altered sleep stage detection performance. Our results show that our feature-based LSTM classification algorithm performs better on 30 s PSG epochs than on 10 s epochs used as input. Future work could determine whether varying the epoch size improves classification outcomes for other types of classification algorithms.
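To illustrate the kind of model the abstract describes, the following is a minimal sketch of a feature-based LSTM sleep-stage classifier. It assumes per-epoch feature vectors (e.g. spectral features computed from each 30 s or 10 s EEG epoch) fed as a sequence; the feature count, sequence length, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch (PyTorch) of a feature-based LSTM sleep-stage classifier.
# All sizes below are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

N_FEATURES = 8   # assumed number of features extracted per PSG epoch
N_STAGES = 5     # Wake, N1, N2, N3, REM
SEQ_LEN = 20     # assumed number of consecutive epochs per input sequence


class SleepStageLSTM(nn.Module):
    def __init__(self, n_features=N_FEATURES, hidden=64, n_stages=N_STAGES):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_stages)

    def forward(self, x):
        # x: (batch, seq_len, n_features) -- one feature vector per epoch
        out, _ = self.lstm(x)
        # per-epoch stage logits: (batch, seq_len, n_stages)
        return self.head(out)


if __name__ == "__main__":
    model = SleepStageLSTM()
    # Dummy batch of feature sequences, e.g. derived from 30 s or 10 s epochs
    x = torch.randn(4, SEQ_LEN, N_FEATURES)
    logits = model(x)
    print(logits.shape)  # torch.Size([4, 20, 5])
```

Shortening the epoch length only changes how the feature vectors are computed and how many epochs make up a recording; the classifier structure itself is unchanged, which is what allows the 30 s and 10 s conditions to be compared directly.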
