Abstract

Thanks to the huge accumulation of Electronic Health Records (EHRs), numerous deep-learning-based predictive models have been proposed for clinical prediction tasks. Most existing state-of-the-art (SOTA) models are built on recurrent neural networks (RNNs). Despite their success, RNN-based models suffer from three main limitations. (i) Accuracy: their prediction accuracy drops quickly as the length of the EHR sequence increases. (ii) Efficiency: the recurrence of RNNs makes parallel computation impossible, which hurts the efficiency of such models in practice. (iii) Interpretability: the outputs of RNN-based models are difficult to explain due to the opaque nature of deep models. In this paper, we resort to the recently advanced attention mechanism to model the dependencies between inputs and outputs, which overcomes the shortcomings of RNN-based models in accuracy and efficiency. For interpretability, we model these relationships with two linear mappings from the input to the output, which capture two important factors in learning patient representations: one for context-aware information and the other for time-aware representation. We empirically demonstrate the effectiveness of the proposed model in both accuracy and computational efficiency, and we analyze and discuss the reasonability of each explanation approach.
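The attention mechanism the abstract refers to can be sketched as scaled dot-product self-attention over a sequence of visit embeddings. The sketch below is illustrative only (the function and variable names are assumptions, not the paper's actual architecture); it shows why attention sidesteps the recurrence bottleneck: every pairwise dependency is computed in one matrix product, so the whole sequence is processed in parallel rather than step by step.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over X (T visits x d dims).

    Unlike an RNN, no step depends on the previous step's output,
    so all T positions can be computed in parallel.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (T, T) pairwise dependencies
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key axis
    return weights @ V                              # context-aware representations

# Toy example: a patient with 5 visits, each embedded in 8 dimensions.
rng = np.random.default_rng(0)
T, d = 5, 8
X = rng.standard_normal((T, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # one context-aware vector per visit
```

The attention weight matrix itself is one route to interpretability: each row shows how much every past visit contributed to the current representation.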

