Abstract

Effective healthcare resource allocation is critical for intelligent medical systems, and accurate prediction of in-hospital resource utilization from medical records is a prerequisite. Existing methods for this task usually rely on manual feature engineering, which requires extensive domain knowledge, and do not exploit the textual information in electronic medical records, e.g., diagnosis and operation texts. In this paper, we propose a deep in-hospital resource utilization prediction approach that jointly estimates in-hospital costs and lengths of stay from patients' admission records via multi-task learning. Our approach can exploit the heterogeneous information in records, such as patient features, diagnosis/operation texts, and diagnosis/operation IDs, via a multi-view learning framework in which Transformers learn the representations of words, diagnoses, and operations. In addition, we design a diagnosis–operation attention network to capture the relations between diagnoses and operations. Moreover, since different words, diagnoses, and operations have different importance for cost estimation, we incorporate a hierarchical attention network that selects important words, diagnoses, and operations to learn informative record representations. Extensive experiments on a real-world medical dataset validate the effectiveness of our approach.
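The abstract's two core ingredients, attention-based pooling over record items (words, diagnoses, or operations) and a joint multi-task objective over cost and length of stay, can be sketched minimally as follows. This is an illustrative sketch only, not the authors' implementation: the function names (`attention_pool`, `multi_task_loss`), the single-query attention form, and the loss weight `alpha` are all assumptions, and real models would use learned Transformer encoders rather than random embeddings.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(H, w):
    # H: (n, d) item representations (e.g., diagnosis embeddings)
    # w: (d,) query vector; in a trained model this would be learned
    scores = softmax(H @ w)      # (n,) attention weights over items
    return scores @ H            # (d,) attention-weighted record vector

def multi_task_loss(cost_pred, cost_true, los_pred, los_true, alpha=0.5):
    # Joint objective: weighted sum of squared errors for the two tasks
    # (cost estimation and length-of-stay prediction)
    return alpha * (cost_pred - cost_true) ** 2 \
        + (1 - alpha) * (los_pred - los_true) ** 2

rng = np.random.default_rng(0)
H = rng.normal(size=(5, 8))      # 5 diagnoses, 8-dim embeddings (toy data)
w = rng.normal(size=8)
r = attention_pool(H, w)         # record-level representation, shape (8,)
loss = multi_task_loss(2.0, 1.0, 3.0, 5.0, alpha=0.5)  # -> 2.5
```

In the paper's hierarchical setting, this pooling would be applied at each level (words into a diagnosis/operation vector, then diagnoses and operations into a record vector) before the shared record representation feeds both prediction heads.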
