Abstract

Machine learning started as an academic discipline, but it is now becoming widespread across diverse domains such as retail, healthcare, and finance. This non-academic face of machine learning creates a new set of challenges: the use of such complex methods by non-expert users has increased the need for interpretable models. To this end, in this paper we propose an approach for extracting explanation rules from support vector machines. The core idea is to use kernels whose feature spaces are composed of logical propositions. On top of that, a search algorithm retrieves the most relevant features/rules that can be used to explain the trained model. Experiments on both categorical and real-valued datasets show the effectiveness of the proposed approach.
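
The abstract does not spell out the kernel construction, but one standard way to obtain a feature space of logical propositions over binary data is a conjunctive Boolean kernel, where K(x, z) = C(⟨x, z⟩, d) counts the degree-d conjunctions (logical ANDs of input propositions) satisfied by both examples. The sketch below is illustrative only, not the paper's actual method; the function name, degree, and toy data are assumptions.

```python
import numpy as np
from math import comb
from sklearn.svm import SVC

def conjunctive_kernel(X, Z, degree=2):
    # Hypothetical Boolean kernel: for binary inputs, <x, z> counts the
    # propositions true in both vectors, and C(<x, z>, degree) counts the
    # degree-sized conjunctions satisfied by both. The implicit feature
    # space is therefore the set of all such conjunctive rules.
    dot = X @ Z.T
    return np.vectorize(lambda s: comb(int(s), degree))(dot).astype(float)

# Toy usage: train an SVM on the precomputed Boolean kernel.
X = np.array([[1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 0]])
y = np.array([1, 1, -1, -1])
clf = SVC(kernel="precomputed").fit(conjunctive_kernel(X, X), y)
print(clf.predict(conjunctive_kernel(X, X)))
```

Under this kind of construction, a rule-extraction step could then search the (implicit) space of conjunctions for those contributing most to the decision function, which matches the abstract's description of retrieving the most relevant features/rules.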
