Abstract
In medical practice today, clinical data registries have become a powerful tool for measuring and driving quality improvement, especially among multicenter projects. Registries face the known problem of deriving dependable, clearly defined metrics from electronic medical record data, which are typically scattered across the chart and often drawn from unreliable sources. The Society of Thoracic Surgeons (STS) registry is one such example; it relies on manual data collection by trained clinical staff in an effort to obtain the highest-fidelity data possible. As a possible alternative, our team designed an algorithm to test the feasibility of producing computer-derived data for the case of postoperative mechanical ventilation hours. In this article, we compare the accuracy of algorithm-derived mechanical ventilation data with manual data extraction. We created a novel algorithm that calculates mechanical ventilation duration for any postoperative patient using raw data from our Epic electronic medical record. Using nursing documentation of airway devices; documentation of lines, drains, and airways; and respiratory therapist ventilator settings, the algorithm produced results that were then validated against the STS registry, allowing us to compare algorithm results with data collected by human chart review. Any discrepancies were resolved by manual calculation performed by a research team member. The STS registry contained a total of 439 University of California Los Angeles cardiac cases from April 1, 2013, to March 31, 2014. After excluding 201 patients for not remaining intubated, tracheostomy use, or having 2 surgeries on the same day, 238 cases met inclusion criteria. Comparing the postoperative ventilation durations between the 2 data sources showed that 158 (66%) durations agreed within 1 hour, indicating a probable correct value for both sources. Among the discrepant cases, the algorithm was exclusively correct in 75 (93.8%) cases, whereas the STS value was exclusively correct in 1 (1.3%). The remaining 4 cases were inconclusive after manual review because of a prolonged documentation gap between mechanical and spontaneous ventilation; in these cases, the STS and algorithm results differed from one another but both fell within the transition timespan. This yields an overall accuracy of 99.6% (95% confidence interval, 98.7%-100%) for the algorithm compared with 68.5% (95% confidence interval, 62.6%-74.4%) for the STS data (P < .001). A computer algorithm capable of calculating metrics such as total ventilator time holds significant appeal, especially because manual review is labor intensive and prone to human error. By incorporating 3 different data sources and applying preprogrammed clinical judgment to overcome common data-entry errors, our algorithm proved more comprehensive and more accurate, and it required a fraction of the time needed for manual review.
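The abstract does not reproduce the algorithm itself; the Python sketch below is only a minimal illustration of the general idea of reconciling timestamped documentation from the three sources (nursing airway documentation; lines, drains, and airways records; respiratory therapist ventilator settings) into a single postoperative ventilation duration. The event model, the anchoring of the duration at the end of surgery, and the 12-hour inconclusive-gap rule are assumptions made for demonstration, not the authors' published method.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Iterable, Optional

# Illustrative sketch only: the field names and reconciliation rules below are
# assumptions for demonstration, not the published algorithm.

@dataclass
class Event:
    source: str          # "nursing_airway", "lda", or "rt_vent_settings"
    time: datetime       # documentation timestamp
    on_ventilator: bool  # True if the entry implies mechanical ventilation

def ventilation_hours(events: Iterable[Event],
                      surgery_end: datetime,
                      max_gap: timedelta = timedelta(hours=12)) -> Optional[float]:
    """Estimate postoperative mechanical ventilation duration in hours.

    Combines timestamped documentation from several sources, keeps only
    postoperative entries, and takes the extubation time as the first
    documentation of spontaneous ventilation after the last ventilator-positive
    entry. Returns None if the record is inconclusive (the documentation gap
    between mechanical and spontaneous ventilation exceeds `max_gap`).
    """
    postop = sorted((e for e in events if e.time >= surgery_end),
                    key=lambda e: e.time)
    vent_times = [e.time for e in postop if e.on_ventilator]
    if not vent_times:
        return 0.0  # no postoperative ventilator documentation

    last_vented = max(vent_times)
    spontaneous_after = [e.time for e in postop
                         if not e.on_ventilator and e.time > last_vented]
    if not spontaneous_after:
        return None  # never documented off the ventilator

    extubated = min(spontaneous_after)
    if extubated - last_vented > max_gap:
        return None  # documentation gap too long to call

    return (extubated - surgery_end).total_seconds() / 3600.0

# Example usage with synthetic timestamps
if __name__ == "__main__":
    end = datetime(2013, 6, 1, 14, 0)
    chart = [
        Event("rt_vent_settings", datetime(2013, 6, 1, 15, 0), True),
        Event("nursing_airway",   datetime(2013, 6, 1, 18, 30), True),
        Event("lda",              datetime(2013, 6, 1, 19, 45), False),
    ]
    print(ventilation_hours(chart, end))  # -> 5.75
```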
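The reported accuracies can be reproduced from the case counts in the abstract. The snippet below assumes the 4 inconclusive cases were counted as consistent with both sources and that a normal-approximation binomial confidence interval was used; under those assumptions it matches the published percentages and intervals to within rounding.

```python
from math import sqrt

def prop_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float, float]:
    """Proportion with a normal-approximation 95% confidence interval."""
    p = successes / n
    half = z * sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

n = 238                      # cases meeting inclusion criteria
agree = 158                  # durations agreeing within 1 hour
algo_only, sts_only = 75, 1  # exclusively correct after manual review
inconclusive = 4             # assumed here to count as consistent with both sources

print(prop_ci(agree + algo_only + inconclusive, n))  # ~ (0.996, 0.988, 1.000)
print(prop_ci(agree + sts_only + inconclusive, n))   # ~ (0.685, 0.626, 0.744)
```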