Abstract

Healthcare data are becoming increasingly available. These data can help healthcare professionals and patients themselves better understand patient status and can potentially lead to improved care quality. However, analyzing these data is challenging because they are large-scale and heterogeneous, high-dimensional and sparse, and temporal yet irregularly sampled. In this paper, we propose a method called Double Core Memory Networks (DCMN) to integrate information from different modalities of longitudinal patient data and learn a joint patient representation effective for downstream analytical tasks such as risk prediction. DCMN is designed not only to disentangle the temporal and non-linear intra-modal dependencies within each modality but also to capture long-term inter-modal interactions. DCMN models are end-to-end memory networks with two external memory cores in which each modality of data is compressed and stored. Each memory core has an information-flow controller, called a query, that interacts with the external memory module. In addition, we incorporate a gating mechanism into the basic DCMN model to dynamically regulate memory interaction. DCMN models have multiple computational layers (hops) that allow data of different modalities to interact with each other recurrently, with each memory core alternately accessing the external memory hop by hop. We evaluate DCMN models on two outcome prediction tasks: mortality prediction on the public Medical Information Mart for Intensive Care III (MIMIC-III) database and cost prediction on the Hospital Quality Monitoring System (HQMS) dataset. Experimental results demonstrate that our DCMN models are more competitive than the baseline methods in the multimodal prediction setting.

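The following is a minimal, hedged sketch of the kind of dual-core, multi-hop memory architecture the abstract describes: two modality-specific memory cores, a query controller per core, a gating mechanism, and alternating cross-modal reads over several hops. All names (`DualCoreMemoryNet`, `n_hops`, the projection layers, the gate) are illustrative assumptions, not the authors' released implementation or exact equations.

```python
# Illustrative sketch (assumption, not the paper's code) of a dual-core
# memory network with multi-hop, gated cross-modal interaction.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DualCoreMemoryNet(nn.Module):
    def __init__(self, d_a, d_b, d_mem, n_hops=3, n_out=1):
        super().__init__()
        self.n_hops = n_hops
        # Project each modality's time steps into a shared memory space.
        self.embed_a = nn.Linear(d_a, d_mem)
        self.embed_b = nn.Linear(d_b, d_mem)
        # One query (information-flow controller) per memory core.
        self.query_a = nn.Parameter(torch.randn(d_mem))
        self.query_b = nn.Parameter(torch.randn(d_mem))
        # Gate regulating how much cross-modal information is absorbed per hop.
        self.gate = nn.Linear(2 * d_mem, d_mem)
        self.out = nn.Linear(2 * d_mem, n_out)

    @staticmethod
    def _read(memory, query):
        # Content-based addressing: softmax attention over memory slots.
        scores = torch.einsum("btd,bd->bt", memory, query)
        attn = F.softmax(scores, dim=-1)
        return torch.einsum("bt,btd->bd", attn, memory)

    def forward(self, x_a, x_b):
        # x_a: (batch, T_a, d_a), e.g. vitals; x_b: (batch, T_b, d_b), e.g. codes.
        mem_a = self.embed_a(x_a)            # memory core for modality A
        mem_b = self.embed_b(x_b)            # memory core for modality B
        q_a = self.query_a.expand(x_a.size(0), -1)
        q_b = self.query_b.expand(x_b.size(0), -1)
        for _ in range(self.n_hops):
            # Alternating access: each query reads the *other* core's memory,
            # so the modalities interact recurrently hop by hop.
            read_a = self._read(mem_b, q_a)
            read_b = self._read(mem_a, q_b)
            # Gated update of each query with its cross-modal read vector.
            g_a = torch.sigmoid(self.gate(torch.cat([q_a, read_a], dim=-1)))
            g_b = torch.sigmoid(self.gate(torch.cat([q_b, read_b], dim=-1)))
            q_a = g_a * read_a + (1 - g_a) * q_a
            q_b = g_b * read_b + (1 - g_b) * q_b
        # Joint patient representation feeds the downstream prediction head.
        return self.out(torch.cat([q_a, q_b], dim=-1))


# Usage: predict a risk score from two modalities of different lengths.
model = DualCoreMemoryNet(d_a=16, d_b=32, d_mem=64, n_hops=3)
vitals = torch.randn(8, 48, 16)      # e.g. 48 hourly measurements
codes = torch.randn(8, 20, 32)       # e.g. 20 embedded diagnosis codes
risk_logits = model(vitals, codes)   # shape: (8, 1)
```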