Abstract

The crucial role of interpretability in many practical scenarios has driven much of machine learning research towards the development of interpretable approaches. In this work, we present PRL, a game-theory-based method that achieves state-of-the-art accuracy while keeping the focus on the interpretability of its predictions. The proposed approach is an instance of the more general preference learning framework. By design, the method identifies the most relevant features even in high-dimensional problems, thanks to an online feature generation mechanism. Moreover, the algorithm is theoretically well-founded: a game-theoretical analysis proves its convergence. To assess the quality of the proposed approach, we compared PRL against state-of-the-art methods in a wide range of classification settings. The experimental evaluation focuses on interpretability, with an in-depth analysis of visualization, feature selection, and explainability.
