Abstract

The last decade produced rapid developments and powerful new technologies that are creating a huge upsurge in artificial intelligence research. However, for critical operational decisions (e.g., in consulting services), explanations and interpretable results are becoming a necessity. Integrating knowledge graphs, which provide relevant background knowledge in machine-readable form, with machine learning methods yields a new form of hybrid intelligent system in which each component benefits from the other's strengths. Our research aims at an explainable system with a specific knowledge graph architecture that can generate human-understandable results even when no suitable domain experts are available. Against this background, we focus on the interpretability of a knowledge graph-based explainable artificial intelligence approach for business process analysis. We design an interpretation framework and show how interpretable models are generated. Result paths describing weaknesses of, and improvement measures for, a business process are used to produce stochastic decision trees, which improve the interpretability of the results. This can enable consulting self-services for clients or serve as a means of accelerating classical consulting projects.
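The abstract does not specify how result paths are turned into stochastic decision trees. The following is a minimal sketch under stated assumptions: each result path is treated as a sequence of knowledge-graph labels (e.g., a weakness followed by an improvement measure), paths are aggregated into a tree, and branch probabilities are estimated from sibling path frequencies. All names here (Node, build_tree, the example paths) are hypothetical illustrations, not the authors' implementation.

```python
from collections import defaultdict

class Node:
    """A node in the stochastic decision tree: a label plus weighted children."""
    def __init__(self, label):
        self.label = label
        self.count = 0      # how many result paths pass through this node
        self.children = {}  # child label -> Node

def build_tree(result_paths):
    """Aggregate knowledge-graph result paths into a stochastic decision tree.

    Each path is a sequence such as ("long cycle time", "automate approval step").
    Sibling counts are normalized into branch probabilities when printing.
    """
    root = Node("process")
    for path in result_paths:
        node = root
        node.count += 1
        for label in path:
            node = node.children.setdefault(label, Node(label))
            node.count += 1
    return root

def print_tree(node, sibling_total=None, indent=0):
    """Print each branch with its empirical probability among its siblings."""
    prob = node.count / sibling_total if sibling_total else 1.0
    print(" " * indent + f"{node.label} (p={prob:.2f})")
    for child in node.children.values():
        print_tree(child, node.count, indent + 2)

# Hypothetical result paths extracted from the knowledge graph
paths = [
    ("long cycle time", "automate approval step"),
    ("long cycle time", "parallelize tasks"),
    ("high error rate", "add validation rule"),
    ("long cycle time", "automate approval step"),
]
print_tree(build_tree(paths))
```

Running this prints the tree with empirical branch probabilities (e.g., "long cycle time" with p=0.75, under which "automate approval step" has p=0.67), which is one plausible reading of how frequency-weighted result paths could yield an interpretable stochastic decision tree.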
