Abstract

In recent years, multilayer perceptrons (MLPs) have been used successfully to solve various problems in different fields. However, it is difficult to interpret the reasoning process of an MLP, so in most cases the MLP is used as a black box. In our previous study, we extracted rules from a trained shallow MLP based on its hidden-neuron outputs. In this study, we investigate the possibility of extracting simpler and better rules from a deep MLP. Hidden layers closer to the output layer are believed to learn more abstract concepts, so it is natural to expect that simpler and better rules can be extracted from the higher layers. Experimental results on several public datasets confirm this expectation: decision trees built from the outputs of hidden layers closer to the output layer are indeed smaller. That is, it is possible to extract more understandable knowledge from a deep MLP, even if the MLP as a whole is difficult to understand. In addition, based on the complexity of the extracted knowledge, it is also possible to determine the number of layers needed to solve a given problem.
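The idea summarized above can be sketched as follows: fit one decision tree per hidden layer of a trained MLP, using each layer's activations as features and the MLP's own predictions as targets, then compare the resulting tree sizes. This is a minimal illustrative sketch, not the paper's actual experimental setup; the dataset, layer sizes, and hyperparameters are assumptions chosen only for demonstration.

```python
# Hypothetical sketch: compare decision-tree sizes across the hidden layers
# of a deep MLP. Dataset and architecture are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
Xs = StandardScaler().fit_transform(X)

# Deep MLP with three hidden layers (sizes chosen arbitrarily).
mlp = MLPClassifier(hidden_layer_sizes=(32, 16, 8), activation="relu",
                    max_iter=2000, random_state=0).fit(Xs, y)

def hidden_activations(mlp, X):
    """Forward pass that collects the activation matrix of every hidden layer."""
    acts, h = [], X
    for W, b in zip(mlp.coefs_[:-1], mlp.intercepts_[:-1]):
        h = np.maximum(h @ W + b, 0.0)  # ReLU, matching activation="relu"
        acts.append(h)
    return acts

acts = hidden_activations(mlp, Xs)

# If higher layers learn more abstract concepts, the trees fitted on them
# should need fewer nodes to reproduce the MLP's decisions.
for i, H in enumerate(acts, start=1):
    tree = DecisionTreeClassifier(random_state=0).fit(H, mlp.predict(Xs))
    print(f"hidden layer {i}: {tree.tree_.node_count} tree nodes")
```

The node count of each tree serves as a simple proxy for rule complexity: a layer whose activations admit a small tree yields more understandable extracted knowledge.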
