Abstract

Background

With the emergence of electronic health records (EHRs) as a pervasive healthcare information technology, new opportunities and challenges for the use of clinical data for quality measurements arise with respect to data quality, data availability and comparability. The objective of this study was to test whether data extracted from EHRs is of comparable quality to survey data for the calculation of quality indicators.

Methods

Data from surveys describing patient cases, filled out by physiotherapists in 2009-2010, were used to calculate scores on eight quality indicators (QIs) measuring the quality of physiotherapy care. In 2011, data were extracted directly from EHRs. The two data collection methods were evaluated for comparability, and EHR data were compared to survey data on completeness and correctness.

Results

Five of the eight QIs could be extracted from the EHRs. Three were omitted from the indicator set because they proved too difficult to extract from the EHRs. Another QI proved incomparable due to errors in the extraction software of some of the EHRs. Three out of four comparable QIs performed better (p < 0.001) on completeness in the EHR data. EHR data also proved to be correct: the relative change in indicator scores between EHR and survey data was small (<5 %) for three out of four QIs.

Conclusion

The data quality of EHRs was sufficient for the calculation of QIs, although comparability to survey data was problematic. Standardization is needed, not only to compare different data collection methods properly, but also to compare between practices with different EHRs. EHRs can hold narrative data, but natural language processing tools are needed to quantify these free-text fields; such development can narrow the comparability gap between QI scores based on EHR data and those based on survey data. EHRs have the potential to provide real-time feedback to professionals and quality measurements for research, but more effort is needed to create unambiguous and uniform information and to unlock written text in a standardized manner.

Electronic supplementary material

The online version of this article (doi:10.1186/s12911-016-0382-4) contains supplementary material, which is available to authorized users.
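The abstract compares EHR-extracted data to survey data on completeness and on the relative change in indicator scores. Below is a minimal sketch of how such a comparison could be computed; the function names, field names (goal_set, outcome_measured) and example records are illustrative assumptions, not the study's actual extraction or analysis code.

```python
# Illustrative sketch only: compares completeness of two data sources and the
# relative change in a quality-indicator (QI) score. Field names and records
# are hypothetical, not taken from the study.

def completeness(records, fields):
    """Proportion of required fields that are filled across all records."""
    filled = sum(1 for r in records for f in fields if r.get(f) not in (None, ""))
    return filled / (len(records) * len(fields))

def relative_change(score_ehr, score_survey):
    """Relative change in a QI score when moving from survey data to EHR data."""
    return (score_ehr - score_survey) / score_survey

# Hypothetical records for one QI: was a treatment goal set and an outcome measured?
survey_records = [{"goal_set": "yes", "outcome_measured": "yes"},
                  {"goal_set": "yes", "outcome_measured": ""}]
ehr_records = [{"goal_set": "yes", "outcome_measured": "yes"},
               {"goal_set": "yes", "outcome_measured": "yes"}]
fields = ["goal_set", "outcome_measured"]

print(f"survey completeness: {completeness(survey_records, fields):.2f}")
print(f"EHR completeness:    {completeness(ehr_records, fields):.2f}")
print(f"relative change in QI score: {relative_change(0.82, 0.80):+.1%}")
```

Under this sketch, a relative change below 5 % would count as "correct" in the sense used in the abstract.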

Highlights

  • With the emergence of electronic health records (EHRs) as a pervasive healthcare information technology, new opportunities and challenges for the use of clinical data for quality measurements arise with respect to data quality, data availability and comparability

  • Our study showed that changing the data collection method from surveys to data extracted from EHRs had a major impact on the comparability of the content

  • Our study focused on the comparison of data quality; quality of care research would benefit from a closer look at other data properties to assess the added value of using the EHR as a data source for research purposes


Introduction

With the emergence of electronic health records (EHRs) as a pervasive healthcare information technology [1], new opportunities and challenges for the use of clinical data arise with respect to data quality, data availability and comparability [2]. When extracting loose chunks of information from EHRs for quality measurements, such a full picture of the patient case is not possible, but the risk of bias is smaller. It is questionable whether all data one can retrieve from survey items can be extracted from EHRs. A survey is designed to measure the quality of care, whereas most EHRs are developed for much broader purposes, such as administration, reporting and clinical reasoning. In a recent review on methods and dimensions of quality assessment of EHR data, 57 of the 95 reviewed articles conducted comparative research, of which only nine compared EHR
