Abstract

Interest in the automated analysis of sleep has grown over the past decade. Advances in computing have brought the required intensive calculations within reach [1], while at the same time the demand for sleep diagnosis and analysis is increasing. The prevalence of sleep disorders is high, and awareness of their consequences is spreading among patients, health authorities, and clinicians. This awareness is directing more and more patients to sleep centers. The upward trend in demand for sleep evaluations concerns not only sleep specialists: sleep appears to be an extremely promising territory for other fields, such as cardiology and nutrition [2]. Needs far exceed capacities. Data analysis has been identified as one of the bottlenecks in the sleep evaluation process, which makes clear the importance of developing tools to facilitate analysis. These developments have an impact that is medical as well as economic and social.

The promises of automated sleep analysis are attractive: it is fast, objective, and reproducible. These qualities may improve data management and patient care, and nothing is more attractive than efficiency and quality. But there are also negative consequences. Dangers include loss of employment for technicians and loss of human expertise; patient safety may be jeopardized as well. The central issue is, indeed, reliability. First and foremost, is automated analysis safe for patients? Is it useful for clinicians in their routine care of patients? What about clinical trials and research?

The controversy is intense. An idea of the entrenched positions can be found in recent issues of Sleep [3–5] and the Journal of Clinical Sleep Medicine [6]. With all the interests at stake [7], the debate needs carefully conducted studies aimed at evaluating automated software, for instance for the automated detection of obstructive sleep apnea [8] or for automated sleep scoring such as the study presented by Stege and colleagues in this issue of Sleep and Breathing.

While the question "Does it work or not?" is simple, the answer is complicated. There is no simple box to check for yes or no: automated analysis always fails to some degree and succeeds to some extent. The comfortable yes/no question should therefore be replaced by a more relative one: how much does it work? More precisely put, what level of agreement can and should be expected? This is the potentially controversial question raised by the work of Stege and colleagues.

For automated analysis, the conventional standard is visual scoring. The AASM devotes a significant part of its activity to renewing and clarifying scoring rules and to spreading their application through training and center accreditation (http://www.aasmnet.org/ISR/Default.aspx). But as intense as that effort may be, there is, and always will be, an unavoidable uncertainty in visual scoring. As stated by Silber and colleagues, “no visual-based scoring system will ever be perfect, as all methods are limited by the physiology of the human eye and visual cortex, individual differences in scoring experience, and the ability to detect events viewed using a 30-second epoch” [9]. Studies dealing with interrater variability have reported an interscorer agreement of 82 % [10] and have shown that the highest agreement scorers can achieve, under very specific conditions, is 87 % [11]. Even intrascorer agreement never reaches 100 %.
The mean score-rescore agreement is 88 % [11].
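To make the agreement figures above concrete: the percentages cited from [10] and [11] are epoch-by-epoch comparisons of two scorings of the same recording. The sketch below is a purely illustrative example, not part of the editorial or of the cited studies, using invented toy hypnograms. It computes raw percent agreement, the kind of figure quoted above, alongside Cohen's kappa, a standard chance-corrected alternative.

```python
# Illustrative sketch: quantifying epoch-by-epoch agreement between two
# scorings of the same recording. The hypnograms are invented toy data.

from collections import Counter

STAGES = ["W", "N1", "N2", "N3", "R"]  # AASM stages, 30-second epochs

def percent_agreement(a, b):
    """Fraction of epochs assigned the same stage by both scorers."""
    assert len(a) == len(b)
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    assert len(a) == len(b)
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Agreement expected if both scorers labeled epochs independently
    # at their observed stage frequencies.
    p_e = sum((ca[s] / n) * (cb[s] / n) for s in STAGES)
    return (p_o - p_e) / (1 - p_e)

# Toy example: two scorers disagree on a few epochs (e.g., N1 vs. N2).
scorer_1 = ["W", "N1", "N2", "N2", "N3", "N3", "N2", "R", "R", "W"]
scorer_2 = ["W", "N2", "N2", "N2", "N3", "N2", "N2", "R", "R", "W"]

print(f"agreement: {percent_agreement(scorer_1, scorer_2):.0%}")  # 80%
print(f"kappa:     {cohens_kappa(scorer_1, scorer_2):.2f}")       # 0.73
```

Raw percent agreement flatters scorers when one stage dominates the night, as N2 typically does; kappa discounts the agreement expected by chance, which is one reason methodological comparisons often report both.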
