Abstract

Effective healthcare resource allocation is critical for intelligent medical systems, and accurate prediction of in-hospital resource utilization from medical records is a prerequisite. Existing methods for this task usually rely on manual feature engineering, which requires extensive domain knowledge, and do not exploit the textual information in electronic medical records, e.g., diagnosis and operation texts. In this paper, we propose a deep in-hospital resource utilization prediction approach that jointly estimates in-hospital costs and lengths of stay from patients’ admission records via multi-task learning. Our approach can exploit the heterogeneous information in records, such as patient features, diagnosis/operation texts, and diagnosis/operation IDs, via a multi-view learning framework, where Transformers are used to learn the representations of words, diagnoses, and operations. In addition, we design a diagnosis–operation attention network to capture the relations between diagnoses and operations. Moreover, since different words, diagnoses, and operations have different importance for cost estimation, we incorporate a hierarchical attention network that selects important words, diagnoses, and operations to learn informative record representations. Extensive experiments on a real-world medical dataset validate the effectiveness of our approach.
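The core idea of the abstract can be illustrated with a minimal NumPy sketch: attention pooling over per-view item embeddings (here, toy random vectors standing in for Transformer outputs over diagnoses and operations), followed by two shared-representation linear heads that jointly predict cost and length of stay. All names, dimensions, and embeddings below are hypothetical placeholders, not the authors' actual architecture or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(items, query):
    # items: (n, d) item embeddings (e.g., diagnoses or operations)
    # query: (d,) learned attention query vector
    weights = softmax(items @ query)   # importance of each item
    return weights @ items             # (d,) pooled view representation

# Toy stand-ins for Transformer-learned embeddings (hypothetical values)
d = 8
diagnoses = rng.normal(size=(5, d))   # 5 diagnosis embeddings
operations = rng.normal(size=(3, d))  # 3 operation embeddings
query = rng.normal(size=d)

# Pool each view with attention, then concatenate into a record representation
record = np.concatenate([attention_pool(diagnoses, query),
                         attention_pool(operations, query)])  # (2*d,)

# Multi-task learning: shared record representation, separate regression heads
W_cost = rng.normal(size=2 * d)
W_los = rng.normal(size=2 * d)
cost_pred = record @ W_cost   # predicted in-hospital cost
los_pred = record @ W_los     # predicted length of stay
```

In the paper's full model, the attention is hierarchical (word level and diagnosis/operation level) and a diagnosis–operation attention network models cross-view relations; the sketch only shows the shared-representation, two-head multi-task pattern.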
