Abstract

Introduction

This study retrospectively assesses the performance of Somnolyzer against trained sleep scientists in polysomnography (PSG) analysis, under routine inter-laboratory concordance conditions in an Australian laboratory.

Methods

Retrospective study (2016–2018). Study data included 200 epoch fragments containing SWS, REM and NREM sleep. PSG data sets (n = 36) consisted of type 1 (n = 31) and type 2 (n = 5) studies. Individual scorers were compared to a master score set established by consensus between two experienced sleep scientists. The automatic analysis system used was Somnolyzer 24x7. Data analysis involved, for Group 1, intraclass correlations and Bland-Altman plots and, for Group 2, paired t-tests.

Results

Human analysis outperformed automatic analysis on every major metric assessed except sleep latency. Automatic analysis performed at a similar level on 6 of the 9 major metrics assessed (r > 0.9); however, its 95% limits-of-agreement range was larger. Automatically analysed RDIs were more likely to be lower than the master score sets, as were arousal indices.

Conclusions

These findings support caution with automatic analysis, particularly in medical interpretation and practice. Automatic analysis performance can vary dramatically between PSG data sets, from output comparable to human analysis down to a level well below that of human scorers under concordance conditions. Consideration should be given to the accuracy of automatic analysis when drawing scientific conclusions in borderline cases.
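The Bland-Altman comparison mentioned in the Methods reduces to a simple computation: the bias is the mean of the paired scorer differences, and the 95% limits of agreement are bias ± 1.96 SD of those differences. A minimal sketch follows; the function name and the paired total-sleep-time values are hypothetical, for illustration only, and are not taken from the study's data.

```python
import numpy as np

def bland_altman_limits(scores_a, scores_b):
    """Return bias (mean difference) and 95% limits of agreement
    between two sets of paired measurements."""
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    diffs = a - b
    bias = diffs.mean()
    sd = diffs.std(ddof=1)  # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired total-sleep-time values (minutes) from a
# human scorer and an automatic system -- illustrative only.
human = [410, 395, 430, 388, 402, 415]
automatic = [405, 400, 425, 380, 398, 420]
bias, lo, hi = bland_altman_limits(human, automatic)
print(f"bias = {bias:.1f} min, 95% LoA = ({lo:.1f}, {hi:.1f})")
```

A wider interval between `lo` and `hi` corresponds to the larger limits-of-agreement range the study reports for automatic analysis: individual studies can deviate substantially from a human score set even when the average bias is small.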

