Abstract

A neural network tree (NNTree) is a hybrid learning model whose overall structure is a decision tree (DT) and whose non-terminal nodes are each an expert neural network (ENN). So far we have shown through experiments that NNTrees are not only learnable but also interpretable, provided that the number of inputs to each ENN is limited. NNTrees might therefore be an efficient model for unifying learning and understanding. One important remaining problem is that even if an NNTree is interpretable, the rules extracted from it may not be understandable, because they may contain too many details. To solve this problem, we propose a new type of NNTree in which each ENN is a multi-template matcher (MTM) rather than a multilayer perceptron (MLP). In this model, each template can serve as a precedent case, so an MTM-NNTree can be understood straightforwardly. In this paper, we provide an evolutionary algorithm for designing MTM-NNTrees, and we show through experiments that MTM-NNTrees are as powerful as MLP-NNTrees.
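To make the architecture described above concrete, the following is a minimal sketch of a non-terminal node that routes an input to the child whose template it matches best (nearest template by Euclidean distance). All class and method names are illustrative assumptions; the paper's actual node model, template form, and the evolutionary design algorithm are not reproduced here.

```python
import numpy as np

class MTMNode:
    """Hypothetical sketch of an NNTree non-terminal node using a
    multi-template matcher (MTM): the input is routed down the branch
    whose stored template is nearest to it. Names are illustrative,
    not taken from the paper."""

    def __init__(self, templates, children):
        # templates[i] is the prototype vector associated with children[i];
        # a child is either another MTMNode or a class label (a leaf)
        self.templates = np.asarray(templates, dtype=float)
        self.children = children

    def route(self, x):
        # index of the template closest to x (Euclidean distance)
        dists = np.linalg.norm(self.templates - np.asarray(x, dtype=float), axis=1)
        return int(np.argmin(dists))

    def predict(self, x):
        child = self.children[self.route(x)]
        return child.predict(x) if isinstance(child, MTMNode) else child


# Toy two-level tree: the root routes left/right on the first feature,
# the left child discriminates on the second feature.
leaf_left = MTMNode(templates=[[0.0, 0.0], [0.0, 1.0]], children=["A", "B"])
root = MTMNode(templates=[[0.0, 0.5], [1.0, 0.5]], children=[leaf_left, "C"])

print(root.predict([0.1, 0.9]))  # routed left, then matches template [0, 1] -> "B"
print(root.predict([0.9, 0.5]))  # routed right -> "C"
```

Because each decision is "this input resembles template i", a path through such a tree reads as a chain of comparisons against stored cases, which is the interpretability property the abstract emphasizes.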
