Abstract

A neural network tree (NNTree) is a decision tree (DT) in which each non-terminal node is an expert neural network (ENN). Compared with conventional DTs, NNTrees can achieve good performance with fewer nodes, and the performance can be improved further through incremental learning with new data. We have also found that comprehensible rules can be extracted more easily from NNTrees than from conventional neural networks if the number of inputs of each ENN is limited. Usually, the time complexity of interpreting a neural network increases exponentially with the number of inputs. If we adopt NNTrees whose nodes have a limited number of inputs, the time complexity of rule extraction becomes polynomial. In this paper, we introduce three methods for selecting features when the number of inputs is limited. The effectiveness of these methods is verified through experiments with four databases taken from the machine learning repository of the University of California at Irvine.

