Abstract

Recently, deep learning techniques based on electronic health record (EHR) data have achieved success in medical prediction. However, due to the complex, heterogeneous nature of EHR data, most previous studies build models on single-modal data (e.g., structured data or unstructured free-text data). Although some studies have trained models on multimodal EHR data and achieved stronger performance, they still suffer from clinical practicability problems, as they require a separate model for each medical prediction task. Moreover, they ignore the potential correlations between clinical prediction tasks. In this work, we propose UniMed, a Unified model that handles multiple Medical prediction tasks simultaneously by learning from multimodal EHR data. UniMed encodes each input modality separately and uses a transformer decoder followed by task-specific prediction heads to make a prediction for each task. Experimental results on a publicly available EHR dataset demonstrate a time-progressive correlation between medical prediction tasks and show the effectiveness of our method.
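
The following is a minimal PyTorch sketch of the architecture as the abstract describes it: per-modality encoders, a transformer decoder attending over the fused multimodal memory, and task-specific prediction heads. The class name, all dimensions, the learned-task-query mechanism, and the choice of encoders are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; hyperparameters and query scheme are assumptions.
import torch
import torch.nn as nn

class UniMedSketch(nn.Module):
    def __init__(self, d_model=128, n_tasks=3, n_heads=4, n_layers=2,
                 structured_dim=64, vocab_size=5000):
        super().__init__()
        # Modality-specific encoders (assumed: one for structured data,
        # one for free-text clinical notes).
        self.structured_enc = nn.Linear(structured_dim, d_model)
        self.text_emb = nn.Embedding(vocab_size, d_model)
        self.text_enc = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers)
        # One learned query per prediction task; the decoder lets each
        # task attend over the concatenated multimodal memory.
        self.task_queries = nn.Parameter(torch.randn(n_tasks, d_model))
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True),
            num_layers=n_layers)
        # Task-specific heads (binary outcomes assumed here).
        self.heads = nn.ModuleList(
            [nn.Linear(d_model, 1) for _ in range(n_tasks)])

    def forward(self, structured, text_ids):
        # structured: (B, T_s, structured_dim); text_ids: (B, T_t)
        mem_s = self.structured_enc(structured)            # (B, T_s, d_model)
        mem_t = self.text_enc(self.text_emb(text_ids))     # (B, T_t, d_model)
        memory = torch.cat([mem_s, mem_t], dim=1)          # fused memory
        queries = self.task_queries.unsqueeze(0).expand(
            structured.size(0), -1, -1)                    # (B, n_tasks, d_model)
        hidden = self.decoder(queries, memory)             # (B, n_tasks, d_model)
        # One logit per task from its own prediction head.
        return [head(hidden[:, i]) for i, head in enumerate(self.heads)]

# Usage with random inputs: batch of 2, 10 structured steps, 40 text tokens.
model = UniMedSketch()
logits = model(torch.randn(2, 10, 64), torch.randint(0, 5000, (2, 40)))
```

Sharing one decoder across tasks while keeping separate heads is what allows a single model to serve all prediction targets, which is the practicability argument the abstract makes against per-task models.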
