Abstract

Objective: Patient information can be retrieved more efficiently in electronic medical record (EMR) systems by using machine learning models that predict which information a physician will seek in a clinical context. However, information-seeking behavior varies across EMR users. To explicitly account for this variability, we derived hierarchical models and compared their performance to that of nonhierarchical models in identifying relevant patient information in intensive care unit (ICU) cases.

Materials and Methods: Critical care physicians reviewed ICU patient cases and selected data items relevant for presenting at morning rounds. Using patient EMR data as predictors, we derived hierarchical logistic regression (HLR) and standard logistic regression (LR) models to predict the relevance of each data item.

Results: In 73 pairs of HLR and LR models, the HLR models achieved an area under the receiver operating characteristic curve of 0.81, 95% confidence interval (CI) [0.80–0.82], which was statistically significantly higher than that of the LR models (0.75, 95% CI [0.74–0.76]). Further, the HLR models achieved statistically significantly lower expected calibration error (0.07, 95% CI [0.06–0.08]) than the LR models (0.16, 95% CI [0.14–0.17]).

Discussion: The physician reviewers demonstrated variability in selecting relevant data. Our results show that HLR models perform significantly better than LR models with respect to both discrimination and calibration, likely because they explicitly model physician-related variability.

Conclusion: Hierarchical models can yield better performance when there is physician-related variability, as in the case of identifying relevant information in the EMR.
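As a rough illustration of the modeling setup described in the abstract, the sketch below fits a logistic regression with physician-specific intercepts shrunk toward zero (a simple stand-in for a hierarchical model with physician-level random effects) alongside a pooled standard logistic regression, then compares discrimination (AUC) and expected calibration error on simulated data. All data, variable names, and hyperparameters are hypothetical; this is not the authors' implementation.

```python
# Minimal sketch (hypothetical data, not the authors' code): hierarchical-style
# logistic regression with physician-specific intercepts vs. pooled logistic regression.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical data: EMR data items reviewed by several physicians.
n_physicians, n_items, n_features = 8, 400, 5
X = rng.normal(size=(n_physicians * n_items, n_features))   # data-item predictors
phys = np.repeat(np.arange(n_physicians), n_items)          # reviewer id per row
true_b = rng.normal(scale=0.8, size=n_physicians)           # physician-level variability
true_w = np.array([1.0, -0.5, 0.8, 0.0, 0.3])
p_true = 1 / (1 + np.exp(-(X @ true_w + true_b[phys])))
y = rng.binomial(1, p_true)                                  # "relevant" labels

def fit_hierarchical_lr(X, y, groups, prior_var=1.0, lr=0.05, n_iter=3000):
    """Gradient-based fit of a logistic regression with group-specific intercepts
    that are L2-shrunk toward zero (a simple stand-in for partial pooling)."""
    n_groups = groups.max() + 1
    counts = np.bincount(groups, minlength=n_groups)
    w = np.zeros(X.shape[1])
    b = np.zeros(n_groups)
    for _ in range(n_iter):
        eta = X @ w + b[groups]
        resid = y - 1 / (1 + np.exp(-eta))                   # y - predicted probability
        w += lr * (X.T @ resid) / len(y)
        grad_b = np.bincount(groups, weights=resid, minlength=n_groups)
        b += lr * (grad_b - b / prior_var) / np.maximum(counts, 1)
    return w, b

w_hat, b_hat = fit_hierarchical_lr(X, y, phys)
p_hlr = 1 / (1 + np.exp(-(X @ w_hat + b_hat[phys])))

p_lr = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]  # pooled baseline

def expected_calibration_error(y_true, p_pred, n_bins=10):
    """Weighted average |observed rate - mean predicted probability| over bins."""
    bins = np.clip((p_pred * n_bins).astype(int), 0, n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(y_true[mask].mean() - p_pred[mask].mean())
    return ece

print("AUC  HLR:", roc_auc_score(y, p_hlr), " LR:", roc_auc_score(y, p_lr))
print("ECE  HLR:", expected_calibration_error(y, p_hlr),
      " LR:", expected_calibration_error(y, p_lr))
```

On data with genuine reviewer-level variation, the group-intercept model typically shows better calibration than the pooled model, mirroring the direction of the reported results (though the numbers here are from simulated data and are not comparable to the paper's).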
