Abstract

The decision tree (DT) is one of the most popular approaches to machine learning. Using DTs, we can extract comprehensible decision rules and make decisions based only on useful features. The drawback is that, once a DT is designed, there are no free parameters for further development. In contrast, a neural network (NN) is adaptable or learnable, but the number of free parameters is usually too large to be determined efficiently. To obtain the advantages of both approaches, it is desirable to combine them. Among the many ways of combining NNs and DTs, this paper introduces the neural network tree (NNTree). An NNTree is a decision tree in which each node is an expert neural network (ENN). The overall tree structure can be designed by following the same procedure as for a conventional DT, and each node (an ENN) can be designed using genetic algorithms (GAs). Thus, the NNTree also provides a way to integrate DTs, NNs, and GAs. Through experiments on a digit recognition problem, we show that NNTrees are more efficient than traditional DTs in the sense that a higher recognition rate can be achieved with fewer nodes. Furthermore, if the fitness function for each node is defined properly, better generalization ability can also be achieved.
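The abstract's idea, a tree whose internal nodes are small neural networks whose weights are evolved by a GA, can be sketched as follows. This is a minimal illustration, not the paper's exact architecture or fitness definition: the one-hidden-layer ENN, the negative-weighted-entropy fitness, and the elitist mutation-only GA are all assumptions chosen for brevity.

```python
import numpy as np

def enn_forward(w, X, n_hidden, n_branch):
    """Route each sample to a branch using a tiny one-hidden-layer net.

    The flat weight vector w is unpacked into (W1, b1, W2, b2); the
    branch is the argmax over the n_branch output units.
    """
    d = X.shape[1]
    i = 0
    W1 = w[i:i + d * n_hidden].reshape(d, n_hidden); i += d * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden * n_branch].reshape(n_hidden, n_branch); i += n_hidden * n_branch
    b2 = w[i:i + n_branch]
    h = np.tanh(X @ W1 + b1)
    return np.argmax(h @ W2 + b2, axis=1)

def fitness(w, X, y, n_hidden, n_branch):
    """Score a node: 0 minus the weighted entropy of the induced partition.

    Higher (closer to 0) means the branches are purer -- an assumed
    stand-in for the paper's node fitness.
    """
    branch = enn_forward(w, X, n_hidden, n_branch)
    score = 0.0
    for b in range(n_branch):
        mask = branch == b
        if mask.sum() == 0:
            continue
        _, counts = np.unique(y[mask], return_counts=True)
        p = counts / counts.sum()
        ent = -(p * np.log2(p)).sum()
        score -= mask.mean() * ent
    return score

def evolve_node(X, y, n_hidden=4, n_branch=2, pop=40, gens=60, rng=None):
    """Design one ENN node with a simple elitist GA (keep best half, mutate)."""
    rng = rng or np.random.default_rng(0)
    d = X.shape[1]
    n_w = d * n_hidden + n_hidden + n_hidden * n_branch + n_branch
    P = rng.normal(size=(pop, n_w))
    for _ in range(gens):
        f = np.array([fitness(w, X, y, n_hidden, n_branch) for w in P])
        elite = P[np.argsort(f)[-pop // 2:]]                    # survivors
        children = elite + 0.1 * rng.normal(size=elite.shape)   # mutated copies
        P = np.vstack([elite, children])
    f = np.array([fitness(w, X, y, n_hidden, n_branch) for w in P])
    return P[np.argmax(f)]
```

A full NNTree would call `evolve_node` recursively on each branch's subset of the data, stopping when a node is pure enough, mirroring the top-down procedure used for conventional DTs.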
