Abstract

A neural network tree (NNTree) is a modular neural network whose overall structure is a decision tree (DT) and whose non-terminal nodes are expert neural networks (ENNs). One advantage of NNTrees is that they are effectively gray boxes: they can be interpreted easily if the number of inputs for each ENN is limited. To design interpretable NNTrees, we have previously proposed a genetic algorithm based on multi-objective optimization. That algorithm, however, is suitable only for problems with binary inputs. In this paper, we propose a method for handling problems with continuous inputs. The basic idea is to find a small number of critical points for each continuous input using self-organized learning, and to quantize the input using these critical points. Experimental results on several public databases show that the NNTrees built from the quantized data are much more interpretable and, in most cases, perform as well as those obtained from the original data.
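To make the quantization idea concrete, the following is a minimal sketch (not the paper's exact procedure): a simple competitive, self-organized update places a few critical points along one continuous input, and each value is then replaced by the index of its nearest critical point. The function names (`find_critical_points`, `quantize`), the number of points, and the learning-rate schedule are illustrative assumptions.

```python
import numpy as np

def find_critical_points(x, n_points=3, lr=0.1, epochs=20, seed=0):
    """Place a few critical points along one continuous input with a
    competitive (self-organized) rule: the nearest point is pulled
    toward each sample. Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    # Initialize the critical points from random samples of the input.
    points = rng.choice(x, size=n_points, replace=False).astype(float)
    for _ in range(epochs):
        for v in rng.permutation(x):
            winner = np.argmin(np.abs(points - v))       # nearest critical point
            points[winner] += lr * (v - points[winner])  # pull it toward the sample
        lr *= 0.9  # decay the learning rate each epoch
    return np.sort(points)

def quantize(x, points):
    """Replace each continuous value by the index of its nearest critical point."""
    return np.argmin(np.abs(x[:, None] - points[None, :]), axis=1)

# Example: quantize one feature column before building the NNTree.
feature = np.random.default_rng(1).normal(size=200)
cps = find_critical_points(feature, n_points=3)
codes = quantize(feature, cps)  # small discrete alphabet per input
```

With each input reduced to a handful of discrete codes, every ENN in the tree receives a limited, easily enumerable input space, which is what makes the resulting NNTree interpretable.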
