Abstract

Current Artificial Intelligence (AI) technologies are widely regarded as black boxes: they provide powerful predictive capabilities, but their internal structures are not inherently transparent. A transparent model that lets users understand its inner workings allows them to follow the learning and inference process, which builds trust and confidence in the model. While methods that aid interpretability have been developed, most require the user to have a certain level of AI knowledge and do not allow fine-tuning based on prior knowledge. In this paper, we present a smart interpretable model (SIM) framework that requires little to no AI knowledge and can be used to create a set of fuzzy IF-THEN rules, along with their corresponding membership functions, with ease. The framework also allows users to incorporate prior knowledge at various steps and generates comprehensive insights summarized from rules and samples, enabling users to identify anomalous rules, the feature contributions of each sample, and the confidence level of each rule. We demonstrate these capabilities and compare our model to existing rule-based models on datasets commonly used for rule-based model validation. The models are evaluated on predictive performance and on whether the rules generated by SIM are similar to those generated by other, more recent rule-based models.
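To make the rule format concrete, below is a minimal sketch of a fuzzy IF-THEN rule built from triangular membership functions. The variable names, membership-function parameters, and the use of the min t-norm for rule firing strength are illustrative assumptions; the abstract does not specify SIM's exact rule representation.

```python
# Illustrative sketch only: names and parameters here are hypothetical,
# not SIM's actual rule format.

def triangular(x, a, b, c):
    """Triangular membership function: 0 at a and c, 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def rule_strength(temperature, humidity):
    """IF temperature is 'high' AND humidity is 'low' THEN risk is 'elevated'.

    The firing strength is the minimum (a common t-norm) of the
    antecedent membership degrees.
    """
    mu_high_temp = triangular(temperature, a=25.0, b=35.0, c=45.0)
    mu_low_humidity = triangular(humidity, a=0.0, b=10.0, c=30.0)
    return min(mu_high_temp, mu_low_humidity)

if __name__ == "__main__":
    # A sample with temperature 33 and humidity 12 partially fires the rule.
    print(f"firing strength: {rule_strength(33.0, 12.0):.2f}")  # 0.80
```

Because each rule reads as a natural-language condition and each membership degree is directly inspectable, a user can audit why a rule fired for a given sample, which is the kind of transparency the framework aims to provide.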
