Abstract

In the field of eXplainable Artificial Intelligence (XAI), generating interpretable models that match the performance of state-of-the-art deep learning methods is one of the main challenges. In this work, we present a novel interpretable model for image classification that combines the power of deep convolutional networks with the transparency of decision trees. We explore different training techniques in which convolutional networks and decision trees are trained together using gradient-based optimization methods, as is customary in deep learning. The result is a transparent model in which a soft decision tree makes the final classification based on human-understandable concepts extracted by a convolutional neural network. We tested the proposed solution on two challenging image classification datasets and compared it with state-of-the-art approaches, achieving competitive results.
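To make the idea of a soft decision tree concrete, the sketch below shows a forward pass of a depth-2 soft tree over a vector of concept activations (the kind of features a CNN backbone would produce). This is a minimal illustration, not the paper's implementation: the tree depth, weight shapes, and function names are assumptions chosen for brevity. Because routing uses sigmoids and leaves mix via softmax, the whole pass is differentiable and can be trained jointly with the CNN by gradient descent, as the abstract describes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def soft_tree_forward(x, inner_w, inner_b, leaf_logits):
    """Forward pass of an illustrative depth-2 soft decision tree.

    x:           (d,) concept activations (e.g. from a CNN).
    inner_w:     (3, d) routing weights of the 3 internal nodes.
    inner_b:     (3,) routing biases.
    leaf_logits: (4, k) class logits at the 4 leaves.
    Returns a (k,) class distribution: a mixture of leaf
    distributions weighted by root-to-leaf path probabilities.
    """
    # Probability of taking the right branch at each internal node.
    p = sigmoid(inner_w @ x + inner_b)          # (3,)
    # Path probabilities to the four leaves (left = 1-p, right = p).
    paths = np.array([
        (1 - p[0]) * (1 - p[1]),   # left,  left
        (1 - p[0]) * p[1],         # left,  right
        p[0] * (1 - p[2]),         # right, left
        p[0] * p[2],               # right, right
    ])                              # sums to 1 by construction
    # Softmax over each leaf's class logits.
    e = np.exp(leaf_logits - leaf_logits.max(axis=1, keepdims=True))
    leaf_dist = e / e.sum(axis=1, keepdims=True)   # (4, k)
    return paths @ leaf_dist                       # (k,)

# Toy example with d=8 concepts and k=3 classes (random weights).
rng = np.random.default_rng(0)
d, k = 8, 3
out = soft_tree_forward(rng.normal(size=d),
                        rng.normal(size=(3, d)),
                        np.zeros(3),
                        rng.normal(size=(4, k)))
print(out)
```

Because every routing decision is a probability rather than a hard split, the predicted class can be explained by reading off the highest-probability root-to-leaf path and the concepts that drove each routing decision.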
