Abstract

Electronic Health Records (EHRs) contain rich information about patient health status, usually comprising both structured and unstructured data. Many studies have focused on distilling valuable information from structured data, such as disease codes, laboratory test results, and treatments. However, relying on structured data alone may be insufficient to reflect patients’ comprehensive status, and such data occasionally contain erroneous records. With recent advances in machine learning (ML) and deep learning (DL) techniques, an increasing number of studies seek more accurate results by incorporating unstructured free-text data as well. This paper reviews studies that use multimodal data, i.e., a combination of structured and unstructured data from EHRs, as input to conventional ML or DL models to address targeted tasks. We searched the Institute of Electrical and Electronics Engineers (IEEE) Digital Library, PubMed, and the Association for Computing Machinery (ACM) Digital Library for articles on ML- and DL-based multimodal EHR studies. Across the final 94 included studies, we focus on how EHR data from different modalities were combined and interacted using conventional ML and DL techniques, how the relationship between multimodal data influenced the choice of fusion strategy, and how these algorithms were applied in EHR-related tasks. Further, we investigate the advantages and limitations of these fusion methods and indicate future directions for ML-based multimodal EHR research.

Keywords: Machine learning; Deep learning; Electronic health records; Multimodal
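To make the notion of fusing structured and unstructured EHR data concrete, the sketch below illustrates one common pattern surveyed in such studies: early fusion by feature concatenation. All inputs, the predict helper, and its weights are hypothetical placeholders, not taken from any reviewed study.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs (illustrative only): structured features such as
# lab values or code counts, and a dense embedding of a clinical note
# produced by some text encoder.
structured = np.array([0.7, 1.2, 0.0, 3.4])
note_embedding = rng.random(8)

# Early fusion: concatenate the modalities into a single feature vector
# that any downstream ML/DL classifier can consume.
fused = np.concatenate([structured, note_embedding])

def predict(features: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """Toy logistic classifier standing in for the downstream model."""
    return 1.0 / (1.0 + np.exp(-(features @ weights + bias)))

weights = rng.random(fused.shape[0])  # placeholder; would be learned in practice
print(f"risk score: {predict(fused, weights, bias=0.1):.3f}")

Alternatives discussed in the literature include late fusion, where each modality is modeled separately and only the predictions are combined; the concatenation above is merely the simplest starting point.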
