Abstract

Despite the widespread use of the “Informatics for Integrating Biology and the Bedside” (i2b2) platform, there are substantial challenges in loading electronic health record (EHR) data into i2b2 and in querying i2b2. We have previously presented a simplified framework for semantic abstraction of EHR records into i2b2. Building on that work, we have created a proof-of-concept implementation of cloud services on an i2b2 data store for cohort identification. Specifically, we have implemented a graphical user interface (GUI) that declares the key components for data import, transformation, and query of EHR data. The GUI integrates with Azure cloud services to create data pipelines for importing EHR data into i2b2, creating derived facts, and querying the data to generate Sankey-like flow diagrams that characterize patient cohorts. We have evaluated the implementation using the real-world MIMIC-III dataset. We discuss the key features of this implementation and directions for future work, which will advance the efforts of the research community in patient cohort identification.
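
As a rough illustration of the kind of Sankey-like cohort flow the abstract refers to, the sketch below builds such a diagram with the Plotly library. Plotly is our assumption here (the paper does not name its charting tool), and the cohort stages and patient counts are placeholder values, not results from the MIMIC-III evaluation.

```python
# Illustrative sketch of a Sankey-like flow diagram characterizing a patient
# cohort. The stage labels and counts are made-up placeholders, and Plotly is
# an assumed charting library, not necessarily the one used in the paper.
import plotly.graph_objects as go

# Nodes: stages a patient record can flow through in a cohort query.
labels = ["All patients", "Has diabetes code", "On insulin", "Excluded"]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20, thickness=15),
    link=dict(
        source=[0, 0, 1, 1],          # flow origins (indices into labels)
        target=[1, 3, 2, 3],          # flow destinations
        value=[600, 400, 250, 350],   # placeholder patient counts per flow
    ),
))
fig.update_layout(title_text="Cohort flow (illustrative counts)")
fig.write_html("cohort_sankey.html")
```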

Highlights

  • I2b2 has been widely deployed to enable researchers to identify patient cohorts for clinical studies [1, 2]

  • The main contribution of this paper is to describe the functionality needed to use cloud services to load and transform electronic health record (EHR) data into an i2b2 patient store

  • The graphical user interface (GUI) is connected at the back end to the Azure Data Factory application programming interface (API); a minimal sketch of such a connection appears below this list
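
As an illustration of what connecting a GUI back end to the Azure Data Factory API could look like, here is a minimal sketch using the azure-mgmt-datafactory Python SDK. The resource group, factory, pipeline name, and parameter are hypothetical placeholders; the paper's implementation may call the REST API directly or use different names.

```python
# Minimal sketch of triggering an Azure Data Factory pipeline run from code,
# as a GUI back end might do. Resource group, factory, pipeline, and parameter
# names are hypothetical placeholders, not the names used in the paper.
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"    # placeholder
RESOURCE_GROUP = "i2b2-rg"               # hypothetical
FACTORY_NAME = "ehr-import-factory"      # hypothetical
PIPELINE_NAME = "ImportEhrToI2b2"        # hypothetical

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off the pipeline that stages EHR extracts and loads them into i2b2.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"sourceContainer": "mimic-iii-extracts"},  # hypothetical parameter
)

# Poll the run until Data Factory reports a terminal status.
status = "InProgress"
while status in ("Queued", "InProgress"):
    time.sleep(15)
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status

print(f"Pipeline {PIPELINE_NAME} finished with status: {status}")
```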



Introduction

I2b2 has been widely deployed to enable researchers to identify patient cohorts for clinical studies [1, 2]. Despite the widespread use of the i2b2 platform, there remain substantial challenges in importing EHR data into i2b2 and in querying the data in i2b2. There are currently no good-practice guidelines or tools that information technology (IT) teams can use to import EHR data into i2b2, and IT teams face a steep learning curve in understanding the i2b2 web services and database schema well enough to load the data [5, 6]. Due to this lack of tooling, IT teams resort to ad hoc methods to import the data: they develop data import pipelines and perform the DevOps tasks needed to create and manage the computational environment for running those pipelines.
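
To make that learning curve concrete, the sketch below shows roughly what a hand-written import step against the i2b2 star schema can look like. The table and column names follow the standard i2b2 CRC observation_fact layout, but the schema name, connection string, source values, and the load_numeric_fact helper are hypothetical illustrations, not the paper's pipeline code.

```python
# Illustrative only: a hand-rolled load of one EHR observation into the i2b2
# CRC star schema, of the kind IT teams end up writing without better tooling.
# Column names follow the standard i2b2 observation_fact layout; the schema
# name, connection string, helper name, and sample values are hypothetical.
import psycopg2

def load_numeric_fact(conn, patient_num, encounter_num, concept_cd, start_date, nval_num):
    """Insert a single numeric observation for a patient visit."""
    with conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO i2b2demodata.observation_fact
                (encounter_num, patient_num, concept_cd, provider_id,
                 start_date, modifier_cd, instance_num, valtype_cd, nval_num)
            VALUES (%s, %s, %s, '@', %s, '@', 1, 'N', %s)
            """,
            (encounter_num, patient_num, concept_cd, start_date, nval_num),
        )
    conn.commit()

if __name__ == "__main__":
    conn = psycopg2.connect("dbname=i2b2 user=i2b2")  # hypothetical connection
    # e.g. a lab value mapped to a LOINC-based i2b2 concept code; the
    # year reflects MIMIC-III's date-shifted timestamps.
    load_numeric_fact(conn, patient_num=12345, encounter_num=67890,
                      concept_cd="LOINC:2160-0", start_date="2101-07-15",
                      nval_num=1.2)
```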

