Abstract

Sleep stage classification is a key element of sleep disorder diagnosis. It relies on the visual inspection of polysomnography records by trained sleep technologists. Automated approaches have been designed to alleviate this resource-intensive task. However, such approaches are usually compared to the annotations of a single human scorer, despite an inter-rater agreement of only about 85%. The present study introduces two publicly available datasets: DOD-H, including 25 healthy volunteers, and DOD-O, including 55 patients suffering from obstructive sleep apnea (OSA). Both datasets were scored by 5 sleep technologists from different sleep centers. We developed a framework to compare automated approaches to a consensus of multiple human scorers. Using this framework, we benchmarked the main approaches from the literature and compared them to a new deep learning method, SimpleSleepNet, which reaches state-of-the-art performance while being more lightweight. We demonstrated that many methods can reach human-level performance on both datasets. SimpleSleepNet achieved an F1 of 89.9% versus 86.8% on average for human scorers on DOD-H, and an F1 of 88.3% versus 84.8% on DOD-O. Our study highlights that state-of-the-art automated sleep staging outperforms human scorers for both healthy volunteers and patients suffering from OSA, and suggests that automated approaches could be considered for use in the clinical setting.

Highlights

  • Sleep has a crucial impact on human health

  • We introduced two open multi-scored sleep staging datasets, with 25 nights from healthy subjects and 55 nights from patients suffering from obstructive sleep apnea (OSA)

  • We proposed a methodology for evaluation against multiple human scorers
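The proposed evaluation methodology scores each approach against a consensus built from the five technologists' annotations rather than against a single scorer. A minimal sketch of one such consensus-plus-macro-F1 evaluation is given below; the majority-vote rule, the tie-breaking choice, and the stage labels are illustrative assumptions for this sketch, not necessarily the paper's exact procedure:

```python
from collections import Counter

# Illustrative AASM stage labels, one per 30-second epoch (assumption).
STAGES = ["W", "N1", "N2", "N3", "REM"]

def consensus(scorings):
    """Majority-vote consensus across scorers.

    `scorings` is a list of per-scorer label sequences of equal length.
    Ties are broken by taking the first scorer's label; this tie-break
    is an arbitrary choice made for the sketch.
    """
    out = []
    for i in range(len(scorings[0])):
        votes = Counter(s[i] for s in scorings)
        top, top_count = votes.most_common(1)[0]
        if list(votes.values()).count(top_count) > 1:
            out.append(scorings[0][i])  # tie: defer to first scorer
        else:
            out.append(top)
    return out

def macro_f1(y_true, y_pred, labels=STAGES):
    """Unweighted mean of per-stage F1 scores (macro F1)."""
    f1s = []
    for lab in labels:
        tp = sum(t == lab and p == lab for t, p in zip(y_true, y_pred))
        fp = sum(t != lab and p == lab for t, p in zip(y_true, y_pred))
        fn = sum(t == lab and p != lab for t, p in zip(y_true, y_pred))
        if tp == fp == fn == 0:
            continue  # stage absent from both sequences: skip it
        f1s.append(2 * tp / (2 * tp + fp + fn))
    return sum(f1s) / len(f1s)

# Toy example: three scorers over three epochs.
scorers = [["W", "N1", "N2"],
           ["W", "N2", "N2"],
           ["W", "N1", "N3"]]
ref = consensus(scorers)          # ["W", "N1", "N2"]
score = macro_f1(ref, ["W", "N1", "N1"])
```

The same `macro_f1` can then be applied to each human scorer against the consensus of the remaining scorers, which is how a model's score can be put side by side with average human performance.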


Introduction

Sleep has a crucial impact on human health, and sleep disorders are a common public health issue: in the US, studies have shown that millions of people are affected [1]. Polysomnography (PSG) is the gold standard for the diagnosis of highly prevalent sleep disorders such as obstructive sleep apnea (OSA).

Manuscript received October 31, 2019; revised March 2, 2020, April 27, 2020, and June 26, 2020; accepted July 5, 2020. Date of publication July 22, 2020; date of current version September 7, 2020.
