Abstract

Intensive care units provide a data-rich environment with the potential to generate datasets in the realm of big data, which could be used to train powerful machine learning (ML) models. However, currently available datasets are too small and lack diversity because they are limited to individual hospitals. This absence of large and varied datasets is a primary reason for the limited generalizability, and resulting low clinical utility, of current ML models, which are often based on single-center data and suffer from poor external validity. There is an urgent need to develop large-scale, multicentric, and multinational datasets. Ensuring data protection and minimizing re-identification risks are central challenges in this process. The Amsterdam University Medical Center database (AmsterdamUMCdb) and the Salzburg Intensive Care database (SICdb) demonstrate that open-access datasets are possible in Europe while complying with the General Data Protection Regulation (GDPR). A further challenge in building intensive care datasets is the absence of semantic definitions in the source data and the heterogeneity of data formats. Establishing binding industry standards for semantic definitions is crucial to ensure seamless semantic interoperability between datasets.
