Abstract

A crucial characteristic of machine learning models in many domains (such as medical diagnosis, financial analysis, or real-time process monitoring) is interpretability. An interpretation helps humans understand the meaning behind every single prediction the machine makes and lets the user assess its trustworthiness before acting on it. This article presents our work on building an interpretable classification model based on association rule mining and multi-objective optimization. The classification model is a rule list that makes each prediction from multiple rules. The rule list consists of IF ... THEN statements that are understandable to humans. We choose these rules from a large set of pre-mined rules according to an interestingness measure, which is formulated as a function of basic probabilities related to the rules. We learn the interestingness measure through multi-objective optimization, concentrating on two objectives: the classifier's size in number of rules and its prediction accuracy. The model is called MoMAC, for "Multi-Objective optimization to combine Multiple Association rules into an interpretable Classification". Experimental results on benchmark datasets demonstrate that MoMAC outperforms existing rule-based classification methods in classification accuracy.
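To make the rule-list idea concrete, the sketch below shows one plausible way a classifier could combine several matching IF ... THEN rules into a single prediction, here by majority vote among the rules that fire. The function name, rule encoding, and voting scheme are illustrative assumptions, not MoMAC's actual combination method.

```python
from collections import Counter

def predict(rules, default_class, instance):
    """Combine all matching IF ... THEN rules by majority vote.

    Each rule is (conditions, label), where conditions is a list of
    (attribute, value) pairs that must all hold for the rule to fire.
    NOTE: majority voting is only one plausible combination scheme;
    the paper's exact mechanism may differ.
    """
    votes = Counter(
        label
        for conditions, label in rules
        if all(instance.get(attr) == value for attr, value in conditions)
    )
    # Fall back to the default class when no rule fires.
    return votes.most_common(1)[0][0] if votes else default_class

# Example with two hypothetical rules over categorical attributes:
rules = [
    ([("outlook", "sunny"), ("humidity", "high")], "no"),
    ([("outlook", "overcast")], "yes"),
]
print(predict(rules, "yes", {"outlook": "sunny", "humidity": "high"}))  # no
```

A multi-objective optimizer would then search over which pre-mined rules to include, trading off the length of `rules` against the accuracy of `predict` on training data.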
