Abstract

As applications that use Artificial Intelligence multiply, research on explaining how these applications work, and thereby making them more acceptable to humans, is growing as well. In this context, this paper presents an alternative way to explain the data classification process carried out by an Artificial Intelligence algorithm. The work proposes a pixel-oriented information visualization approach to explain a multilayer perceptron classifier using SHAP. Based on the Shapley values obtained, the proposed methodology identified one relevant feature for the Iris dataset, which has four features, and six relevant features for the Wine dataset, which has 13. The relevant features are those that most influence the classification; this information explains the results, as it makes it possible to understand the reasons for the classifier's correct and incorrect predictions.
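
The abstract describes the method only at a high level. As a rough illustration of the kind of computation involved, the sketch below estimates per-feature Shapley values for a multilayer perceptron trained on the Iris dataset. It assumes scikit-learn's MLPClassifier and the shap library's KernelExplainer, which may differ from the authors' actual implementation, and it omits the pixel-oriented visualization step.

```python
# A minimal sketch (assumed tooling, not the paper's exact pipeline):
# estimating Shapley values for an MLP classifier on the Iris dataset.
import numpy as np
import shap
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Load the four-feature Iris dataset and train a multilayer perceptron.
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(
    iris.data, iris.target, random_state=0
)
mlp = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
mlp.fit(X_train, y_train)

# Model-agnostic KernelExplainer: here we explain the predicted probability
# of one class (class 2) against a small background sample.
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(lambda x: mlp.predict_proba(x)[:, 2], background)
shap_values = explainer.shap_values(X_test)  # shape: (n_samples, n_features)

# Features with the largest mean absolute Shapley value are the ones that
# most influence the classification for this class.
mean_abs = np.abs(shap_values).mean(axis=0)
for name, value in zip(iris.feature_names, mean_abs):
    print(f"{name}: {value:.4f}")
```

Features whose mean absolute Shapley value is largest play the role of the "relevant features" referred to in the abstract.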
