Abstract

Current clinical practice relies heavily on technology to support decision-making, and machine learning in particular is increasingly used in decision support systems. This trend is driven partly by information overload: clinicians cannot consider all available information. A key disadvantage, however, is that such Clinical Decision Support Systems (CDSSs) are usually black boxes whose decision-making rationale cannot be understood. In a healthcare environment, where trust and accountability are essential, these systems should ideally be interpretable. Meanwhile, other areas rely almost entirely on observational measures or subjective patient-reported questionnaires to quantify medical conditions. Developers should apply cognitive-science-based Human-Computer Interaction (HCI) research methods, including user-centered iterative design and common standards, when designing models for clinical practice. The main contribution of this paper is a clinical decision support model with enhanced interpretability, including an automated interface-generation engine. The design of personalized decision support improves the generality of recommendation delivery. Clinical evidence is entered and displayed in tabular form; medical concepts are matched to SNOMED CT terms with consistent navigation and are finally presented as a knowledge graph. The design enhances interaction flexibility and integrates seamlessly into the clinical workflow, so domain experts can obtain advice quickly and take appropriate action at convenient points in the workflow without additional effort or delay. Optimizing the interaction and usability of a CDSS for providers can increase its adoption, and iterative design improves the system's usability and its user acceptance scores. Our analysis shows that modern machine learning methods can provide interpretability compatible with a domain Interpretation Knowledge Base (IKB) and with the rankings produced by traditional methods.
Future work should focus on replicating these findings on other datasets and on further evaluating different interpretability methods.


