Abstract

In machine learning, interpretability refers to understanding the underlying behavior behind a model's predictions in order to identify diagnostic criteria and/or new rules from its output. Interpretability increases the usability of a method and is especially relevant in decision support systems, such as medical applications. White-box models such as tree-based, rule-based, and linear models are considered the most comprehensible, but they tend to be less accurate or overly simplistic. In contrast, black-box models such as nonlinear and ensemble models are more accurate but harder to interpret. Thus, a trade-off between accuracy and interpretability is often made when building models to support human experts in a decision-making process. Artificial hydrocarbon networks (AHN) is a supervised learning method that has proved to be very effective for regression and classification problems, and its training process suggests a kind of interpretability. The objective of this work is therefore to present first efforts demonstrating the capacity of AHN to deliver interpretable models. To assess the interpretability of AHN, we address the breast cancer problem using a public dataset. Results showed that AHN can be transformed into tree-based and rule-based models while preserving high accuracy in the output classification.
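To make the accuracy/interpretability trade-off concrete, the following is a minimal illustrative sketch (not the paper's AHN method) of a related, standard technique: approximating an accurate black-box classifier with an interpretable tree-based surrogate on the public breast cancer dataset, assuming scikit-learn is available. The surrogate's "fidelity" measures how often its predictions agree with the black box.

```python
# Illustrative sketch only: a shallow decision-tree surrogate that mimics
# a black-box ensemble, yielding human-readable rules at some cost in
# accuracy. This is NOT the AHN transformation described in the paper.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Black-box model: accurate but hard to interpret.
black_box = RandomForestClassifier(n_estimators=100, random_state=0)
black_box.fit(X_tr, y_tr)

# White-box surrogate: a shallow tree trained to reproduce the
# black box's predictions rather than the original labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_tr, black_box.predict(X_tr))

# Fidelity: fraction of test samples where surrogate and black box agree.
fidelity = (surrogate.predict(X_te) == black_box.predict(X_te)).mean()
print(f"surrogate fidelity: {fidelity:.2f}")

# The surrogate can be printed as explicit if-then rules.
print(export_text(surrogate))
```

A high fidelity indicates that the simple rule set captures most of the black box's decision logic, which is the same kind of property the paper evaluates when transforming AHN into tree-based and rule-based models.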
